You can see the paper right here (“Synthetic recombinase-based state machines in living cells,” by Nathaniel Roquet, Ava P. Soleimany, Alyssa C. Ferris, Scott Aaronson, and Timothy K. Lu). Unfortunately there’s a paywall, but I *think* we’ll be able to post our own version before long (will check). In the meantime, you can read the *MIT News* article (“Scientists program cells to remember and respond to series of stimuli”). In any case, *my* little part of the paper will be fully explained in this post.

A little over a year ago, two MIT synthetic biologists—Timothy Lu and his PhD student Nate Roquet—came to my office saying they had a problem they wanted help with. *Why me?* I wondered. Didn’t they realize I was a quantum complexity theorist, who so hated picking apart owl pellets and memorizing the names of cell parts in junior-high Life Science, that he’d avoided taking a single biology course ever since? (Not counting computational biology, taught in a CS department by Richard Karp.)

Nevertheless, I listened to my biologist guests—which turned out to be an excellent decision.

Tim and Nate told me about a DNA system with surprisingly clear rules, which led them to a strange but elegant combinatorial problem. In this post, I’ll first tell you the rules, then the problem, and finally its solution. There are no mathematical prerequisites for this post, and *certainly* no biology prerequisites: everything will be completely elementary, like learning a card game. Pen and paper might be helpful, though.

As we all learn in kindergarten, DNA is a finite string over the 4-symbol alphabet {A,C,G,T}. We’ll find it more useful, though, to think in terms of entire *chunks* of DNA bases, which we’ll label arbitrarily with letters like X, Y, and Z. For example, we might have X=ACT, Y=TAG, and Z=GATTACA.

We can also *invert* one of these chunks, which means writing it backwards while also swapping the A’s with T’s and the G’s with C’s. We’ll denote this operation by * (the technical name in biology is “reverse-complement”). For example:

X*=AGT, Y*=CTA, Z*=TGTAATC.

Note that (X*)*=X.
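As a quick sanity check, the * operation is two lines of Python. (This is my own toy code, not anything from the paper; the name `star` is my invention.)

```python
# Reverse-complement ("star") a DNA chunk: swap A<->T and G<->C, then reverse.
COMP = str.maketrans("ACGT", "TGCA")

def star(chunk):
    return chunk.translate(COMP)[::-1]
```

Running it on the chunks above reproduces X*=AGT, Y*=CTA, and Z*=TGTAATC, and applying it twice returns the original chunk, confirming (X*)*=X.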

We can then combine our chunks and their inverses into a longer DNA string, like so:

ZYX*Y* = GATTACA TAG AGT CTA.

From now on, we’ll work exclusively with the chunks, and forget completely about the underlying A’s, C’s, G’s, and T’s.

Now, there are also certain special chunks of DNA bases, called *recognition sites*, which tell the little machines that read the DNA when they should start doing something and when they should stop. Recognition sites come in pairs, so we’ll label them using various parenthesis symbols like ( ), [ ], { }. To convert a parenthesis into its partner, you invert it: thus ( = )*, [ = ]*, { = }*, etc. Crucially, the parentheses in a DNA string don’t need to “face the right ways” relative to each other, and they also don’t need to nest properly. Thus, both of the following are valid DNA strings:

X ( Y [ Z [ U ) V

X { Y ] Z { U [ V

Let’s refer to X, Y, Z, etc.—the chunks that aren’t recognition sites—as *letter-chunks*. Then it will be convenient to make the following simplifying assumptions:

- Our DNA string consists of an alternating sequence of recognition sites and letter-chunks, beginning and ending with letter-chunks. (If this weren’t true, then we could just glom together adjacent recognition sites and adjacent letter-chunks, and/or add new dummy chunks, until it *was* true.)
- Every letter-chunk that appears in the DNA string appears exactly once (either inverted or not), while every recognition site that appears, appears exactly twice. Thus, if there are n distinct recognition sites, there are 2n+1 distinct letter-chunks.
- Our DNA string can be decomposed into its constituent chunks *uniquely*—i.e., it’s always possible to tell which chunk we’re dealing with, and when one chunk stops and the next one starts. In particular, the chunks and their reverse-complements are all distinct strings.

The little machines that read the DNA string are called *recombinases*. There’s one kind of recombinase for each kind of recognition site: a (-recombinase, a [-recombinase, and so on. When, let’s say, we let a (-recombinase loose on our DNA string, it searches for (‘s and )’s and ignores everything else. Here’s what it does:

- If there are no (‘s or )’s in the string, or only one of them, it does nothing.
- If the two sites face the same way—like ( ( or ) )—it deletes everything in between them, including the sites themselves.
- If the two sites face opposite ways—like ( ) or ) (—it deletes the sites, and inverts everything in between them.
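These rules are simple enough to simulate. Here’s a toy Python sketch (my own illustration, not the paper’s code; the token representation and the names `invert` and `apply_recombinase` are my inventions):

```python
# A DNA string is a list of tokens: letter-chunks like "A" or "A*", and
# recognition sites, which are single characters. "(" and ")" are partners,
# as are "[" and "]", and so on.

PAIRS = {"(": ")", "[": "]", "{": "}", "<": ">"}
PARTNER = {**PAIRS, **{v: k for k, v in PAIRS.items()}}

def invert(token):
    """Reverse-complement one token: a site flips to its partner,
    a letter-chunk gains or loses its '*'."""
    if token in PARTNER:
        return PARTNER[token]
    return token[:-1] if token.endswith("*") else token + "*"

def apply_recombinase(tokens, left):
    """Apply the recombinase for one site type, named by its 'left' symbol."""
    hits = [i for i, t in enumerate(tokens) if t in (left, PAIRS[left])]
    if len(hits) < 2:
        return list(tokens)            # zero or one site: do nothing
    i, j = hits[:2]
    if tokens[i] == tokens[j]:         # same orientation: delete sites + middle
        return tokens[:i] + tokens[j + 1:]
    # opposite orientation: delete the sites, invert everything in between
    return tokens[:i] + [invert(t) for t in reversed(tokens[i + 1:j])] + tokens[j + 1:]
```

On the string A ( B [ C [ D ) E, this reproduces the worked examples that follow: the [-recombinase deletes, the (-recombinase inverts, and applying both (in either order) yields A D* B* E.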

Let’s see some examples. When we apply [-recombinase to the string

A ( B [ C [ D ) E,

we get

A ( B D ) E.

When we apply (-recombinase to the same string, we get

A D* ] C* ] B* E.

When we apply *both* recombinases (in either order), we get

A D* B* E.

Another example: when we apply {-recombinase to

A { B ] C { D [ E,

we get

A D [ E.

When we apply [-recombinase to the same string, we get

A { B D* } C* E.

When we apply both recombinases—ah, but here the order matters! If we apply { first and then [, we get

A D [ E,

since the [-recombinase now encounters only a single [, and has nothing to do. On the other hand, if we apply [ first and then {, we get

A D B* C* E.

Notice that inverting a substring can change the relative orientation of two recognition sites—e.g., it can change { { into { } or vice versa. It can thereby change what happens (inversion or deletion) when some future recombinase is applied.

One final rule: after we’re done applying recombinases, we remove the remaining recognition sites like so much scaffolding, leaving only the letter-chunks. Thus, the final output

A D [ E

becomes simply A D E, and so on. Notice also that, if we happen to delete one recognition site of a given type while leaving its partner, the remaining site will *necessarily* just bounce around inertly before getting deleted at the end—so we might as well “put it out of its misery,” and delete it right away.

My coauthors have actually implemented all of this in a wet lab, which is what most of the *Science* paper is about (my part is mostly in a technical appendix). They think of what they’re doing as building a “biological state machine,” which could have applications (for example) to programming cells for medical purposes.

But without further ado, let me tell you the math question they gave me. For reasons that they can explain better than I can, my coauthors were interested in the *information storage capacity* of their biological state machine. That is, they wanted to know the answer to the following:

Suppose we have a fixed initial DNA string, with n pairs of recognition sites and 2n+1 letter-chunks; and we also have a recombinase for each type of recognition site. Then by choosing which recombinases to apply, as well as which order to apply them in, how many different DNA strings can we generate as output?

It’s easy to construct examples where the answer is as large as 2^{n}. For instance, given a starting string like

A ( B ) C [ D ] E { F } G < H > I,

we can clearly make 2^{4}=16 different output strings by choosing which subset of recombinases to apply and which not. For example, applying [, {, and < (in any order) yields

A B C D* E F* G H* I.

There are also cases where the number of distinct outputs is less than 2^{n}. For example,

A ( B [ C [ D ( E

can produce only 3 outputs—A B C D E, A B D E, and A E—rather than 4.

What Tim and Nate wanted to know was: can the number of distinct outputs ever be *greater* than 2^{n}?

Intuitively, it seems like the answer “has to be” yes. After all, we already saw that the order in which recombinases are applied can matter enormously. And given n recombinases, the number of possible permutations of them is n!, not 2^{n}. (Furthermore, if we remember that any *subset* of the recombinases can be applied in any order, the number of possibilities is even a bit greater—about e·n!.)

Despite this, when my coauthors played around with examples, they found that the number of distinct output strings never exceeded 2^{n}. In other words, the number of output strings behaved *as if* the order didn’t matter, even though it does. The problem they gave me was either to explain this pattern or to find a counterexample.
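My coauthors’ experiment is easy to replicate by brute force. The sketch below is my own toy model of the system (hypothetical names, not the paper’s code): it applies every subset of the recombinases in every order, strips leftover recognition sites, and counts the distinct results.

```python
from itertools import permutations

PAIRS = {"(": ")", "[": "]", "{": "}", "<": ">"}
PARTNER = {**PAIRS, **{v: k for k, v in PAIRS.items()}}

def invert(token):
    """Reverse-complement one token."""
    if token in PARTNER:
        return PARTNER[token]
    return token[:-1] if token.endswith("*") else token + "*"

def apply_recombinase(tokens, left):
    """Apply the recombinase named by its 'left' symbol (no-op if < 2 sites)."""
    hits = [i for i, t in enumerate(tokens) if t in (left, PAIRS[left])]
    if len(hits) < 2:
        return list(tokens)
    i, j = hits[:2]
    if tokens[i] == tokens[j]:         # same orientation: deletion
        return tokens[:i] + tokens[j + 1:]
    return tokens[:i] + [invert(t) for t in reversed(tokens[i + 1:j])] + tokens[j + 1:]

def distinct_outputs(tokens):
    """All outputs reachable by any subset of recombinases in any order,
    with leftover recognition sites stripped at the end."""
    types = sorted({t if t in PAIRS else PARTNER[t] for t in tokens if t in PARTNER})
    outs = set()
    for r in range(len(types) + 1):
        for seq in permutations(types, r):
            cur = list(tokens)
            for site in seq:
                cur = apply_recombinase(cur, site)
            outs.add(tuple(t for t in cur if t not in PARTNER))
    return outs
```

On A ( B [ C [ D ( E it finds exactly the 3 outputs listed above; on the n=4 example it finds exactly 16; and on every example I tried, the count never exceeds 2^{n}, matching the pattern my coauthors observed.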

I found that the pattern holds:

**Theorem:** Given an initial DNA string with n pairs of recognition sites, we can generate at most 2^{n} distinct output strings by choosing which recombinases to apply and in which order.

Let a *recombinase sequence* be an ordered list of recombinases, each occurring at most once: for example, ([{ means to apply (-recombinase, then [-recombinase, then {-recombinase.

The proof of the theorem hinges on one main definition. Given a recombinase sequence that acts on a given DNA string, let’s call the sequence *irreducible* if every recombinase in the sequence actually finds two recognition sites (and hence, inverts or deletes a nonempty substring) when it’s applied. Let’s call the sequence *reducible* otherwise. For example, given

A { B ] C { D [ E,

the sequence [{ is irreducible, but {[ is reducible, since the [-recombinase does nothing.

Clearly, for every reducible sequence, there’s a shorter sequence that produces the same output string: just omit the recombinases that don’t do anything! (On the other hand, I leave it as an exercise to show that the converse is false. That is, even if a sequence is *ir*reducible, there might be a shorter sequence that produces the same output string.)
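In the same toy Python model as before (again my own hypothetical code), irreducibility is easy to test: just watch whether each recombinase still finds both of its recognition sites at the moment it’s applied.

```python
PAIRS = {"(": ")", "[": "]", "{": "}", "<": ">"}
PARTNER = {**PAIRS, **{v: k for k, v in PAIRS.items()}}

def invert(token):
    """Reverse-complement one token."""
    if token in PARTNER:
        return PARTNER[token]
    return token[:-1] if token.endswith("*") else token + "*"

def apply_recombinase(tokens, left):
    """Apply the recombinase named by its 'left' symbol (no-op if < 2 sites)."""
    hits = [i for i, t in enumerate(tokens) if t in (left, PAIRS[left])]
    if len(hits) < 2:
        return list(tokens)
    i, j = hits[:2]
    if tokens[i] == tokens[j]:
        return tokens[:i] + tokens[j + 1:]
    return tokens[:i] + [invert(t) for t in reversed(tokens[i + 1:j])] + tokens[j + 1:]

def is_irreducible(tokens, seq):
    """True iff every recombinase in seq finds both its sites when applied."""
    cur = list(tokens)
    for site in seq:
        if sum(t in (site, PAIRS[site]) for t in cur) < 2:
            return False
        cur = apply_recombinase(cur, site)
    return True
```

On A { B ] C { D [ E, this confirms that the sequence [{ is irreducible while {[ is reducible, exactly as in the example above.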

**Key Lemma:** Given an initial DNA string, and given a subset of k recombinases, every irreducible sequence composed of all k of those recombinases produces the same output string.

Assuming the Key Lemma, let’s see why the theorem follows. Given an initial DNA string, suppose you want to specify one of its possible output strings. I claim you can do this using only n bits of information. For you just need to specify which subset of the n recombinases you want to apply, in *some* irreducible order. Since every irreducible sequence of those recombinases leads to the same output, you don’t need to specify an order on the subset. Furthermore, for each possible output string S, there must be *some* irreducible sequence that leads to S—given a reducible sequence for S, just keep deleting irrelevant recombinases until no more are left—and therefore some subset of recombinases you could pick that uniquely determines S. OK, but if you can specify each S uniquely using n bits, then there are at most 2^{n} possible S’s.

**Proof of Key Lemma.** Given an initial DNA string, let’s assume for simplicity that we’re going to apply all n of the recombinases, in some irreducible order. We claim that the final output string doesn’t depend at all on *which* irreducible order we pick.

If we can prove this claim, then the lemma follows, since given a proper subset of the recombinases, say of size k<n, we can simply glom together everything between one relevant recognition site and the next one, treating them as 2k+1 giant letter-chunks, and then repeat the argument.

Now to prove the claim. Given two letter-chunks—say A and B—let’s call them *soulmates* if either A and B or A* and B* will necessarily end up next to each other, whenever all n recombinases are applied in some irreducible order, and whenever A or B appears at all in the output string. Also, let’s call them *anti-soulmates* if either A and B* or A* and B will necessarily end up next to each other if either appears at all.

To illustrate, given the initial DNA sequence,

A [ B ( C ] D ( E,

you can check that A and C are anti-soulmates. Why? Because if we apply all the recombinases in an irreducible sequence, then at some point, the [-recombinase needs to get applied, and it needs to find both [ recognition sites. And one of these recognition sites will still be next to A, and the other will still be next to C (for what could have pried them apart? nothing). And when that happens, no matter where C has traveled in the interim, C* must get brought next to A. If the [-recombinase does an inversion, the transformation will look like

A [ … C ] → A C* …,

while if it does a deletion, the transformation will look like

A [ … [ C* → A C*.

Note that C’s [ recognition site will be to its left, if and only if C has been flipped to C*. In this particular example, A never moves, but if it did, we could repeat the analysis for A and *its* [ recognition site. The conclusion would be the same: no matter what inversions or deletions we do first, we’ll maintain the invariant that A and C* (or A* and C) will immediately jump next to each other, as soon as the [ recombinase is applied. And once they’re next to each other, nothing will ever separate them.

Similarly, you can check that C and D are soulmates, connected by the ( recognition sites; D and B are anti-soulmates, connected by the [ sites; and B and E are soulmates, connected by the ( sites.

More generally, let’s consider an arbitrary DNA sequence, with n pairs of recognition sites. Then we can define a graph, called the *soulmate graph*, where the 2n+1 letter-chunks are the vertices, and where X and Y are connected by (say) a blue edge if they’re soulmates, and by a red edge if they’re anti-soulmates.

When we construct this graph, we find that every vertex has exactly 2 neighbors, one for each recognition site that borders it—save the first and last vertices, which border only one recognition site each and so have only one neighbor each. But these facts immediately determine the structure of the graph. Namely, it must consist of a simple *path*, starting at the first letter-chunk and ending at the last one, together with possibly a disjoint union of cycles.

But we know that the first and last letter-chunks can never move anywhere. For that reason, a path of soulmates and anti-soulmates, starting at the first letter-chunk and ending at the last one, *uniquely determines* the final output string, when the n recombinases are applied in any irreducible order. We just follow it along, switching between inverted and non-inverted letter-chunks whenever we encounter a red edge. The cycles contain the letter-chunks that necessarily get deleted along the way to that unique output string. This completes the proof of the lemma, and hence the theorem.
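The path-following procedure can be sketched directly. This is my own toy code, not the paper’s: it builds the soulmate graph from each site pair’s initial neighbors (using the pairing rule derived in the examples above), then walks the path from the first letter-chunk to the last, flipping orientation at every red edge. It assumes the simplifying assumptions from earlier, with all letter-chunks initially un-inverted; the name `utopian_string` anticipates a term used later in this post.

```python
PAIRS = {"(": ")", "[": "]", "{": "}", "<": ">"}
PARTNER = {**PAIRS, **{v: k for k, v in PAIRS.items()}}

def soulmate_edges(tokens):
    """Each recognition-site pair contributes two edges; anti=True is a red edge."""
    pos = {}
    for i, t in enumerate(tokens):
        if t in PARTNER:
            pos.setdefault(t if t in PAIRS else PARTNER[t], []).append(i)
    edges = []
    for i, j in pos.values():
        if tokens[i] == tokens[j]:   # deletion: outer neighbors joined; inner ones circularized
            edges.append((tokens[i - 1], tokens[j + 1], False))
            edges.append((tokens[i + 1], tokens[j - 1], False))
        else:                        # inversion: each pairing comes with an orientation flip
            edges.append((tokens[i - 1], tokens[j - 1], True))
            edges.append((tokens[i + 1], tokens[j + 1], True))
    return edges

def utopian_string(tokens):
    """Follow the soulmate path from the first letter-chunk to the last,
    flipping between X and X* at every red (anti-soulmate) edge."""
    adj = {}
    for x, y, anti in soulmate_edges(tokens):
        adj.setdefault(x, []).append((y, anti))
        adj.setdefault(y, []).append((x, anti))
    out, prev, cur, flipped = [tokens[0]], None, tokens[0], False
    while cur != tokens[-1]:
        nxt, anti = next((y, a) for y, a in adj[cur] if y != prev)
        prev, cur, flipped = cur, nxt, flipped ^ anti
        out.append(cur + "*" if flipped else cur)
    return out
```

On A [ B ( C ] D ( E this reads off A C* D* B E, which is indeed the unique output of the irreducible order [ then (. Note, though, that the function only reads off what the graph *suggests*; as we’ll see shortly, that string need not actually be reachable in the model.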

There are other results in the paper, like a generalization to the case where there can be k pairs of recognition sites of each type, rather than only one. In that case, we can prove that the number of distinct output strings is at most 2^{kn}, and that it *can* be as large as ~(2k/3e)^{n}. We don’t know the truth between those two bounds.

Why is this interesting? As I said, my coauthors had their own reasons to care, involving the number of bits one can store using a certain kind of DNA state machine. I got interested for a different reason: because this is a case where biology threw up a bunch of rules that *look* like a random mess—the parentheses don’t even need to nest correctly? inversion can *also* change the semantics of the recognition sites? evolution never thought about what happens if you delete one recognition site while leaving the other one?—and yet, on analysis, all the rules work in perfect harmony to produce a certain outcome. Change a single one of them, and the “at most 2^{n} distinct DNA sequences” theorem would be false. Mind you, I’m still not sure what biological *purpose* it serves for the rules to work in harmony this way, but they do.

But the point goes further. While working on this problem, I’d repeatedly encounter an aspect of the mathematical model that seemed weird and inexplicable—only to have Tim and Nate explain that the aspect made sense once you brought in additional facts from biology, facts not in the model they gave me. As an example, we saw that in the soulmate graph, the deleted substrings appear as cycles. But surely excised DNA fragments don’t literally form loops? Why yes, apparently, they do. As a second example, consider the DNA string

A ( B [ C ( D [ E.

When we construct the soulmate graph for this string, we get the path

A–D–C–B–E.

Yet there’s no actual recombinase sequence that leads to A D C B E as an output string! Thus, we see that it’s possible to have a “phantom output,” which the soulmate graph suggests should be reachable but that *isn’t* actually reachable. According to my coauthors, that’s because the “phantom outputs” *are* reachable, once you know that in real biology (as opposed to the mathematical model), excised DNA fragments can also reintegrate back into the long DNA string.

Many of my favorite open problems about this model concern algorithms and complexity. For example: given as input an initial DNA string, does there *exist* an irreducible order in which the recombinases can be applied? Is the “utopian string”—the string suggested by the soulmate graph—actually reachable? If it *is* reachable, then what’s the shortest sequence of recombinases that reaches it? Are these problems solvable in polynomial time? Are they NP-hard? More broadly, if we consider all the subsets of recombinases that can be applied in an irreducible order, or all the irreducible orders themselves, what combinatorial conditions do they satisfy? I don’t know—if you’d like to take a stab, feel free to share what you find in the comments!

What I do know is this: I’m fortunate that, before publishing your first biology paper, the editors at *Science* don’t call up your 7th-grade Life Science teacher to ask how you did in the owl pellet unit.

**Unrelated Announcement from My Friend Julia Wise (July 24):** Do you like science and care about humanity’s positive trajectory? July 25 is the final day to apply for Effective Altruism Global 2016. From August 5-7 at UC Berkeley, a network of founders, academics, policy-makers, and more will gather to apply economic and scientific thinking to the world’s most important problems. Last year featured Elon Musk and the head of Google.org. This year will be headlined by Cass Sunstein, the co-author of Nudge. If you apply with this link, the organizers will give you a free copy of Doing Good Better by Will MacAskill. Scholarships are available for those who can’t afford the cost. Read more here. Apply here.

Here’s the summary:

This mini-course will introduce participants to an exciting frontier for quantum computing theory: namely, questions involving the computational complexity of preparing a certain quantum state or applying a certain unitary transformation. Traditionally, such questions were considered in the context of the Nonabelian Hidden Subgroup Problem and quantum interactive proof systems, but they are much broader than that. One important application is the problem of “public-key quantum money” – that is, quantum states that can be authenticated by anyone, but only created or copied by a central bank – as well as related problems such as copy-protected quantum software. A second, very recent application involves the black-hole information paradox, where physicists realized that for certain conceptual puzzles in quantum gravity, they needed to know whether certain states and operations had exponential quantum circuit complexity. These two applications (quantum money and quantum gravity) even turn out to have connections to each other! A recurring theme of the course will be the quest to relate these novel problems to more traditional computational problems, so that one can say, for example, “this quantum money is hard to counterfeit if that cryptosystem is secure,” or “this state is hard to prepare if PSPACE is not in PP/poly.” Numerous open problems and research directions will be suggested, many requiring only minimal quantum background. Some previous exposure to quantum computing and information will be assumed, but a brief review will be provided.

If you still haven’t decided whether to tackle this thing: it’s basically a quantum complexity theory textbook (well, a textbook for certain *themes* within quantum complexity theory) that I’ve written and put on the Internet for free. It has explanations of lots of published results both old and new, but also some results of mine (e.g., about private-key quantum money, firewalls, and AdS/CFT) that I shamefully haven’t yet written up as papers, and that therefore aren’t currently available anywhere else. If you’re interested in certain specific topics—for example, only quantum money, or only firewalls—you should be able to skip around in the notes without too much difficulty.

Thanks *so much* to Denis Therien for organizing the mini-course, Anil Ada for managing the scribe notes effort, my PhD students Adam Bouland and Luke Schaeffer for their special guest lecture (the last one), and finally, the course attendees for their constant questions and interruptions, and (of course) for scribing.

And in case you were wondering: yes, I’ll do absolutely *anything* for science, even if it means teaching a weeklong course in Barbados! Lest you consider this a pure island boondoggle, please know that I spent probably 12-14 hours per day either lecturing (in two 3-hour installments) or preparing for the lectures, with little sleep and just occasional dips in the ocean.

And now I’m headed to the Perimeter Institute for their It from Qubit summer school, not at all unrelated to my Barbados lectures. This time, though, it’s thankfully other people’s turns to lecture…

Yes, that’s correct: the Call for Papers for the 2017 Innovations in Theoretical Computer Science (ITCS) conference, to be held in Berkeley this coming January 9-11, is finally up. I attended ITCS’2015 in Rehovot, Israel and had a blast, and will attend ITCS’2017 if logistics permit.

But that’s not all: in a *Shtetl-Optimized* exclusive, the legendary Christos Papadimitriou, coauthor of the acclaimed Logicomix and ITCS’2017 program chair, has written us a guest post about what makes ITCS special and why you should come. Enjoy! –SA

**ITCS: A hidden treasure of TCS**

by Christos Papadimitriou

Conferences, for me, are a bit like demonstrations: they were fun in the 1970s. There was the Hershey STOC, of course, and that great FOCS in Providence, plus a memorable database theory gathering in Calabria. Ah, children, you should have been there…

So, even though I was a loyal supporter of the ITCS idea from the beginning – the “I”, you recall, stands for *innovation* –, I managed to miss essentially all of them – except for those that left me no excuse. For example, this year the program committee was unreasonably kind to my submissions, and so this January I was in Boston to attend.

I want to tell you about ITCS 2016, because it was a gas.

First, I saw all the talks. I cannot recall this ever happening to me before. I reconnected with fields of old, learned a ton, and got a few cool new ideas.

In fact, I believe that there was no talk with fewer than 60 people in the audience – and that’s about 70% of the attendees. In most talks it was closer to 90%. When was the last conference where you saw that?

And what is the secret of this enhanced audience attention? One explanation is that smaller conference means small auditorium. Listening to the talk no longer feels like watching a concert in a stadium, or an event on TV, it’s more like a story related by a friend. Another gimmick that works well is that, at ITCS, session chairs start the session with a 10-minute “rant,” providing context and their own view of the papers in the session.

Our field got a fresh breath of cohesion at ITCS 2016: cryptographers listened to game theorists in the presence of folks who do data structures for a living, or circuit complexity – for a moment there, the seventies were back.

Ah, those papers, their cleverness and diversity and freshness! Here is a sample of a few with a brief comment for each (take a look at the conference website for the papers and the presentations).

- What is keeping quantum computers from conquering all of NP? It is the problem with destructive measurements, right? Think again, say Aaronson, Bouland and Fitzsimons. In their paper (pdf, slides) they consider several deviations from current restrictions, including non-destructive measurements, and the space ‘just above’ BQP turns out to be a fascinating and complex place.

- Roei Tell (pdf, slides) asks another unexpected question: when is an object far from being far from having a property? On the way to an answer he discovers a rich and productive duality theory of property testing, as well as a very precise and sophisticated framework in which to explore it.

- If you want to represent the permanent of a matrix as the determinant of another matrix of linear forms in the entries, how large must this second matrix be? – an old question by Les Valiant. The innovation by Landsberg and Ressayre (pdf, slides) is that they make fantastic progress in this important problem through geometric complexity: If certain natural symmetries are to be satisfied, the answer is exponential!

(A parenthesis: The last two papers make the following important point clear: *Innovation* in ITCS is *not* meant to be the antithesis of mathematical sophistication. Deep math and methodological innovation are key ingredients of the ITCS culture.)

- When shall we find an explicit function requiring more than 3*n* gates? In their brave exploration of new territory for circuit complexity, Golovnev and Kulikov (pdf, slides) find one possible answer: “as soon as we have explicit dispersers for quadratic varieties.”

- The student paper award went to Aviad Rubinstein for his work (pdf) on auctioning multiple items – the hardest nut in algorithmic mechanism design. He gives a PTAS for optimizing over a large – and widely used – class of “partitioning” heuristics.

Even though there were no lively discussions in the lobby during the sessions – too many folks attending, see? – the interaction was intense and enjoyable during the extra long breaks and the social events.

Plus we had the Graduating Bits night, when the youngest among us get 5 minutes each to tell us about their work. I would have traveled to Cambridge just for that!

All said, ITCS 2016 was a gem of a meeting. If you skipped it, you really missed a good one.

But there is no reason to miss **ITCS 2017**. Let me tell you a few things about it:

- It will be in **Berkeley**, **January 9-11, 2017**, the week before the Barcelona SODA.

- It will take place at the Simons Institute just a few days before the boot camps on Pseudorandomness and Learning.

- I volunteered to be **program chair**, and the steering committee has decided to try a few innovations in the submission process:

- **Submission deadline is mid-September**, so you have a few more weeks to collect your most innovative thoughts. Notification before the STOC deadline.

- Authors will post a copy of their paper, and **will submit to the committee a statement about it**, say 1000 words max. Think of it as your chance to write a favorable referee report for your own paper, telling the committee why you think it is interesting and innovative. If you feel this is self-evident, just tell us that.

- The committee members will be the judges of the overall worth and innovative nature of the paper. Sub-reviewers are optional, and their opinion is not communicated to the rest of the committee.

- The committee may invite speakers to present specific recent interesting work. Submitted papers especially liked by the committee may be elevated to “invited.”

- Plus Graduating Bits, chair rants, social program, not to mention the Simons Institute auditorium and Berkeley in January.

You should come!

The above was, however, the title given to a fun panel discussion that Daniel Harlow, Brian Swingle, and I participated in on Wednesday evening, at the spectacular facility of the New York Academy of Sciences on the 40th floor of 7 World Trade Center in lower Manhattan. The moderator was George Musser of *Scientific American*. About 200 people showed up, some of whom we got to meet at the reception afterward.

(The link will take you to streaming video of the event, though you’ll need to scroll to 6:30 or so for the thing to start.)

The subject of the panel was the surprising recent connections between quantum information and quantum gravity, something that Daniel, Brian, and I all talked about different aspects of. I admitted at the outset that, not only was I not a real expert on the topic (as Daniel and Brian are), I wasn’t even a physicist, just a computer science humor mercenary or whatever the hell I am. I then proceeded, ironically, to explain the Harlow-Hayden argument for the computational hardness of creating a firewall, despite Harlow sitting right next to me (he chose to focus on something else). I was planning also to discuss Lenny Susskind’s conjecture relating the circuit complexity of quantum states to the AdS/CFT correspondence, but I ran out of time.

Thanks so much to my fellow participants, to George for moderating, and especially to Jennifer Costley, Crystal Ocampo, and everyone else at NYAS for organizing the event.

Over the last decade, it’s been a privilege for me to get to know Lenny, to learn from him, and recently, to collaborate with him on quantum circuit complexity and AdS/CFT. Today, Lenny wrote to ask whether I’d share his open letter about the US election on this blog. Of course I said yes. Better yet, Lenny has agreed to my request to be available here to answer questions and comments. Lenny’s views, even when close to mine (as they certainly are in this case), are still *his*, and I’d never want to speak on his behalf. Better that you should hear it straight from the horse’s mouth—as you now will, without further ado. *–Scott A.*

**Letter to My Friends, by Leonard Susskind**

I’m watching this thing that’s happening with disbelief, dismay, and disgust. There is a lunatic loose—I’m sure we all agree about that—but I keep hearing people say that they can’t vote for Hillary. I heard it at my daughter’s birthday party Sunday. Boy oh boy, will these people be sorry if the lunatic gets his way. Personally I do not find it an excuse that “I live in California, which will go Democrat whatever I do.”

I strongly believe in all things Bernie, but Hillary is not the Anti-Bernie. There is much less difference between Clinton and Sanders than the distortions of the nominating process might lead people to think. She’s for health care, he’s for health care; he’s for increased minimum wage, she’s for increased minimum wage; she’s for immigrant rights, he’s for immigrant rights; and on and on it goes.

The lunatic may be just that—a lunatic—but he is also a master of smear and innuendo. He is a gigantic liar, and he knows that if you keep saying something over and over, it sticks in people’s minds. It’s called the Big Lie, and it works. Say it enough and it sows confusion and distrust, not only among the know-nothings, but even among those who know better.

The lunatic and his supporters are exceedingly dangerous. Tell your friends: don’t be fooled. The only thing between us and the lunatic is Hillary. Get off your ass and vote in Nov.

Leonard Susskind

Director, Stanford Institute for Theoretical Physics,

Stanford University


Earlier this month, William Slofstra, currently a Research Assistant Professor at the IQC in Waterloo, posted a breakthrough paper on the arXiv (yeah, I’m using the b-word again—sue me), which solves one version of a ten-year-old problem in entanglement theory called Tsirelson’s Problem. The problem, in one sentence, asks whether all quantum-mechanical correlations that can be achieved using commuting measurements, can also be achieved using measurements on separate parts of a tensor-product Hilbert space. The answer turns out to be **no**. (We’ve long known that the two kinds of correlations are identical as long as you stick to finite-dimensional Hilbert spaces, but Slofstra shows that they can differ in infinite-dimensional spaces.)

One implication of Slofstra’s result can be stated much more concretely, in terms of *two-prover games*: those things like the famous Bell/CHSH experiment, where Alice and Bob are put in separate rooms, and get inputs *x* and *y* respectively, and then without communicating, have to produce outputs *a* and *b* respectively satisfying some relation *V*(*x*,*y*,*a*,*b*). We’ve long known examples of two-prover games, like the Mermin-Peres magic square game, that can be won 100% of the time if Alice and Bob share quantum entanglement, but that can’t be won 100% of the time in a classical universe. Slofstra gives the first example of something different: namely, **a two-prover game that can be won 100% of the time using commuting measurements in an infinite-dimensional Hilbert space—something “formally within the rules of quantum mechanics”—but that can’t be won 100% of the time using any finite number of qubits of entanglement.**

(Previously, Leung, Toner, and Watrous had given a simpler example of such a game, but theirs required the referee to exchange quantum messages with Alice and Bob.)
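To make the two-prover-game formalism concrete, here’s a minimal Python sketch (my own illustration, not anything from Slofstra’s paper) that brute-forces every deterministic classical strategy for the CHSH game, where the winning relation is a⊕b = x∧y. The classical value comes out to 3/4, whereas sharing a single entangled pair lets Alice and Bob win with probability cos²(π/8) ≈ 0.854:

```python
from itertools import product

# CHSH: inputs x, y in {0,1}; outputs a, b in {0,1}; win iff a XOR b == x AND y.
def chsh_wins(strategy_a, strategy_b):
    """Count wins of a deterministic strategy pair over the 4 input pairs."""
    return sum((strategy_a[x] ^ strategy_b[y]) == (x & y)
               for x, y in product(range(2), repeat=2))

# A deterministic strategy is just a function {0,1} -> {0,1}, i.e. a pair of bits.
best = max(chsh_wins(sa, sb)
           for sa in product(range(2), repeat=2)
           for sb in product(range(2), repeat=2))
print(best / 4)  # classical value of CHSH: 0.75
```

(Shared classical randomness can’t beat the best deterministic strategy, by convexity, so 3/4 really is the classical value.)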

If that’s not enough, Slofstra’s construction also shows that, given as input a description of a two-prover game, it’s *undecidable* (as in, equivalent to the halting problem) whether Alice and Bob can win the game with certainty using commuting measurements on an infinite-dimensional Hilbert space. Notoriously, quantum computing theorists have been unable to put *any* upper bound (not even “computable”) on the complexity class MIP*, consisting of languages that admit multi-prover interactive proof systems with entangled provers—precisely because they’ve been unable to bound how much entanglement the provers might need to implement their optimal strategy. Slofstra’s result helps to explain why this problem has been so vexing. I hasten to add, though, that his result doesn’t imply that MIP* contains anything uncomputable, since it remains plausible that anything Alice and Bob can do with infinite entanglement, they can approximate well enough with a finite amount.

That last remark leads to a further fundamental question, one that Slofstra leaves open. Namely, even if Alice and Bob need infinite entanglement to win Slofstra’s game with certainty, can they at least win it with probability *arbitrarily close* to 100%, using larger and larger finite amounts of entanglement? More broadly, could there exist a game that was winnable with certainty using infinite entanglement, but with at most (say) 90% probability using *any* finite amount of entanglement? That problem was shown, by Ozawa (see also Scholz and Werner), to be equivalent to a famous unsolved problem in operator algebras called the Connes embedding problem.

Clarifying the matter further, Slofstra (following earlier authors) points out that there are really *four* classes of two-prover games in play here:

- Games that can be won with certainty using some fixed, finite amount of entanglement.
- Games that can be won with certainty using an infinite amount of entanglement, but still in a tensor-product Hilbert space, H_{A}⊗H_{B}.
- Games that can be won with probability *approaching* 1, using an infinite sequence of strategies from class 1, or equivalently (as it turns out) from class 2.
- Games that can be won with certainty using measurements by Alice and Bob on an infinite-dimensional quantum state |ψ〉, where we require all of Alice’s measurements to commute with all of Bob’s, but don’t require |ψ〉 to have a tensor-product structure.

It can be shown that 1 is a subset of 2 is a subset of 3 is a subset of 4. Previously, we didn’t know *any* of these containments to be strict. Slofstra’s result shows that class 2 differs from class 4—and as a consequence, that class 1 differs from class 4 as well. The Connes embedding problem, which remains open, asks whether 3 differs from 4. It also remains open whether 1 differs from 2 and whether 2 differs from 3.

OK, you ask, but what’s the broader importance of any of this? To me, these problems touch on a question of almost metaphysical significance: namely, *what sorts of experimental evidence could possibly bear on whether the universe was discrete or continuous?*

Because of the Bekenstein bound from quantum gravity, I’m of the opinion that the Hilbert spaces relevant to our universe are ultimately finite-dimensional—or more concretely, that any bounded physical system can store at most ~10^{69} qubits per square meter of surface area. And in quantum computing and information, almost everything we care about only requires finite-dimensional Hilbert spaces—the subject of this blog post being a striking exception!
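To put a number on what “finite-dimensional” would mean in practice, here’s a back-of-the-envelope calculation in Python (my own illustrative figures: the ~10^{69} density quoted above, applied to an Earth-sized surface):

```python
import math

# Rough sketch: qubits storable in an Earth-sized region under a
# Bekenstein-type bound of ~10^69 qubits per square meter of surface area.
qubits_per_m2 = 1e69
earth_radius_m = 6.371e6
surface_area = 4 * math.pi * earth_radius_m ** 2   # about 5.1e14 m^2
print(qubits_per_m2 * surface_area)                # about 5e83 qubits: vast, but finite
```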

Yet if you take a traditional quantum mechanics course, virtually every example you see will involve *infinite*-dimensional Hilbert spaces—starting with the harmonic oscillator, the hydrogen atom, and coherent states of light. And indeed, when I’ve banged the drum about finite-dimensional QM being the truly fundamental kind, physicists have often retorted by pointing to one of the very first things they learn: the position/momentum commutation relation, which only makes sense in infinite-dimensional Hilbert space. Of course, if you tried to *probe* position/momentum commutation to greater and greater precision, eventually your experiments would run up against the limits of quantum gravity, so this retort doesn’t imply that infinite dimensions actually exist at the machine-code level of the universe. But still: is there some *conceivable* experiment for which a positive result would show us that Nature wasn’t describable by a finite number of qubits, but only by an infinite number?

A few years ago, Tobias Fritz wrote a lovely paper about precisely that question. He gave an example of an identity—namely,

V^{-1}U^{2}V=U^{3} implies UV^{-1}UV=V^{-1}UVU

—that holds for all finite dimensional unitary matrices U and V, but fails badly for certain infinite-dimensional ones. He suggested that, if this identity were discovered to fail, then Occam’s Razor would favor an infinite-dimensional Hilbert space for our universe.

Unfortunately, Fritz’s example is open to the same sort of objection that Slofstra’s game is. Namely, as Fritz points out, if the antecedent (V^{-1}U^{2}V=U^{3}) held to excellent precision but not perfectly, then his identity could “fail to within experimental limits,” even if our universe had a finite-dimensional Hilbert space and therefore satisfied his identity.

OK, but suppose that the Connes embedding problem had a negative answer—or equivalently, that there existed a two-prover game G that could be won with certainty using commuting operators, but that couldn’t be won (say) 90% of the time using any finite amount of entanglement. In that case, the believers in a quantumly finite universe, like myself, would have to put some real money on the table, in much the same way the original Bell inequality forced the believers in Einsteinian local hidden variables to put money down. We finitists would have to say that the game G *couldn’t* be won with certainty in the real world, even though formally, winning G with certainty wouldn’t seem to contradict either quantum mechanics or locality. And if, hypothetically, an experiment showed that G *could* be won with certainty—or indeed, with any probability bounded above 90%—then our position would’ve been falsified, much like the Bell experiments falsified Einsteinian locality.

So how did Slofstra prove his result? I’ll be brief, since STOC’2016 is happening in Cambridge right now, and I’d like to get over there in time for lunch.

If you like, the key idea is to start with equations that have infinite-dimensional solutions but no finite-dimensional ones. The most famous such equation is the position/momentum commutation relation mentioned earlier, which for our purposes is just the following matrix equation:

AB – BA = I.

This equation can’t be satisfied by any finite-dimensional matrices, since AB and BA have the same trace, so Tr(AB-BA)=0, but Tr(I) is nonzero. But, OK, let A be the infinite-dimensional linear operator that takes as input the coefficients of a polynomial c_{0}+c_{1}x+c_{2}x^{2}+… and that differentiates the polynomial, and let B be the linear operator that multiplies the polynomial by x. Then I invite you to check that the equation holds.
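Both halves of that argument are easy to check by machine. Here’s a small Python sketch (mine, just for illustration) that uses NumPy for the trace identity and SymPy for the differentiation and multiplication-by-x operators:

```python
import numpy as np
import sympy as sp

# Finite dimensions: AB - BA always has trace 0, so it can never equal I.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
print(np.trace(A @ B - B @ A))  # ~0, up to floating-point error

# Infinite dimensions: let A = d/dx and B = (multiply by x), acting on polynomials.
x = sp.symbols('x')
p = 3 + 2*x + 7*x**2                               # an arbitrary test polynomial
commutator = sp.diff(x*p, x) - x*sp.diff(p, x)     # (AB - BA) applied to p
print(sp.expand(commutator))                       # equals p itself, i.e. AB - BA = I
```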

It’s not known at present how to turn the above equation into a two-prover game—I regard it as a fascinating question whether that’s possible. Rather than an algebraic equation (involving both addition and multiplication), Slofstra instead needs to start with *group* equations (involving only multiplication)—ones with the strange property that they’re satisfied only by the identity matrix or by infinite matrices. Equivalently, he needs a group, defined by a finite list of generators and relations, that admits no nontrivial finite-dimensional matrix representations. Fortunately for him, such groups exist—the first known example being Higman’s group, discovered in 1951. Higman’s group is generated by four elements, a,b,c,d, which satisfy the equations

a^{-1}ba = b^{2}, b^{-1}cb = c^{2}, c^{-1}dc = d^{2}, d^{-1}ad = a^{2}.

I don’t have a good intuition for Higman’s group, but if I did, it would come from rereading this post by Terry Tao. Certainly it has no known “physics interpretation” analogous to that for the position/momentum commutation relation.

Anyway, given such a group, the hard part, the new part, is to give a general way to convert them into the kinds of groups that can be realized as two-prover games. So that’s what Slofstra does, using 50 pages dense with commutative diagrams, quotient maps, and other Serious Math Stuff—hey, I told you this part of the post would be brief! For more, see his paper.

Now, once you have this general transformation of groups, you can also use it to show that there’s no algorithm to decide whether a two-prover game has a perfect commuting strategy, by taking the word problem for groups, which is known to be undecidable, and reducing it to that problem.

Anyway, infinite congrats (or the limit of arbitrarily large finite congrats?) to Slofstra for this achievement! Now it’s off to STOC, which I guess you could also ask me about in the comments if you wanted.

**Unrelated Announcement (June 21):** Ran Raz asks me to announce a workshop for Avi Wigderson’s 60th birthday, to be held at the Institute for Advanced Study in Princeton October 6-8. I’ll be speaking there, and I hope to see many of you there as well!

**Proposition 1:** The presumptive nominee of the Republican Party, Donald Trump, is not even remotely qualified to carry out the duties of the presidency of the United States of America

together with some suggestions for how this proposition might be “proven” (e.g., using Hillary’s recent San Diego speech).

In thus speaking out, Tao joins Stephen Hawking, who recently called Trump “a demagogue, who seems to appeal to the lowest common denominator.” Now Ed Witten just needs to issue *his* statement, and we’ll have a trifecta of “the three greatest geniuses.” This shouldn’t be a stretch: Witten started his career by campaigning for George McGovern, and has supported liberal causes for decades. I’m not *expecting* him to be seen around Princeton sporting a “Make America Great Again” baseball cap.

Notwithstanding this site, I don’t belong on any list with Tao, Hawking, or Witten. Nevertheless, friends have expressed surprise that I’ve had almost nothing to say on *Shtetl-Optimized* about what’s already—regardless of what happens next—the most shocking US political development of my life. Of course, I’ve mined the subject for humor. When I gave the Strachey Lecture on “Quantum Supremacy” on a recent visit to Oxford, I started out by asking whether I should disavow support from quantum supremacists, before averring that I needed to research the subject more. (Get it? I need to research it more?)

I didn’t say more because … well, what could I possibly say that wasn’t being said 10^{10000} other places on the Internet? Shouldn’t *some* little corner of human discourse remain Trump-free, so that civilization has a base from which to rebuild after this is all behind us?

Against those considerations, I recently realized that there’s an argument for speaking out, which goes as follows. Suppose Trump actually wins (as of this writing, Predictwise still gives him a frighteningly-high 27% probability). Suppose my family somehow survives whatever comes next, and one day my daughter Lily comes to me across the rubble of the post-thermonuclear hellscape and says, “daddy, in the Good Days, the days before the War of the Small-Hands Insult, the days when there was plentiful food and water and Internet, didn’t you have what used to be called a ‘blog’? Then why didn’t you speak out on this blog, why didn’t you do whatever tiny amount you could to prevent this?” So, alright, this post is my answer to her.

Trump, famously, doesn’t even try to refute the ubiquitous Hitler comparisons; instead he sneeringly invites them, for example with the faux Nazi salutes at his rallies. Certainly with Trump, there’s the eerily familiar sense of *how could this possibly happen* in a modern country; and of a candidate winning not despite but *because *of his open contempt for Enlightenment norms, his explicit promises to elevate his will over the law.

At the same time, I think there’s a deep reason why Trump is not Hitler. Namely, Hitler *believed* in something, had a purity of conviction. Late in the war, when every available resource was desperately needed at the front, Hitler and his deputies still insisted that scarce trains be used to transport Jews to the death camps. To me, that shows some real dedication. I’m not convinced that an examination of Trump’s long career in bullshit artistry, or of his unhinged statements today, shows a similar dedication to any cause beyond his own self-aggrandizement.

Yet as many others have pointed out, “not being Hitler” *is sort of a low bar* for a President of the United States. If Trump were “merely” a Pinochet or Putin level of badness, I’d still see his election as a calamity for the US and the world—like, maybe an order of magnitude worse than the in-retrospect-mini-calamity of Bush’s election in 2000.

Since Tao was criticized for not explicitly listing his reasons why Trump is unqualified, let me now give my own top ten—any one of which, in a sane world, I think would immediately disqualify Trump from presidential consideration. To maximize the list’s appeal, I’ll restrict myself entirely to reasons that are about global security and the future of democratic norms, and not about which people or groups Trump hurled disgustingly unpresidential insults at (though obviously there’s also that).

- He’s shown contempt for the First Amendment, by saying “libel laws should be opened up” to let him sue journalists who criticize him.
- He’s shown contempt for an independent judiciary, and even lack of comprehension of the judiciary’s role in the US legal system.
- He’s proposed a “temporary ban” on Muslims entering the US. Even setting aside the moral and utilitarian costs, such a plan couldn’t possibly be implemented without giving religion an explicit role in the US legal system that the Constitution was largely written to prevent it from having.
- He’s advocated ordering the military to murder the families of terrorists—the sort of thing that could precipitate a coup d’état if the military followed its own rules and refused.
- He’s refused to rule out the tactical first use of nuclear weapons against ISIS.
- He’s proposed walking away from the US’s defense alliances, which would probably force Japan, South Korea, and other countries to develop their own nuclear arsenals and set off a new round of nuclear proliferation.
- He says that the national debt could be “paid back at a discount”—implicitly treating the US government like a failed casino project, and reneging on Alexander Hamilton’s principle (which has stood since the Revolutionary War, and helps maintain the world’s economic stability) that US credit is ironclad.
- He’s repeatedly expressed admiration for autocrats, including Vladimir Putin and Kim Jong-un, as well as for the Chinese government’s decision to suppress the Tiananmen Square protests by arresting and killing thousands of people.
- He’s expressed the desire to see people who protest his rallies “roughed up.”
- He said that, not only would he walk away from the Paris accords, but the entire concept of global warming is a hoax invented by the Chinese.

Would Trump moderate his insane “policies” once elected? I don’t know, but I’d say that electing someone who promises to ignore the rule of law, in the hope that they don’t really mean it, has *one of the worst track records of any idea in human history*. Like, I acknowledge that a Trump presidency has a wide distribution over possible badnesses: whereas a Ted Cruz presidency would be pretty much a point distribution concentrated on “very bad,” a Trump presidency would have appreciable probability mass on “less bad than Cruz,” but also appreciable mass on “doesn’t even fit on the badness chart.”

Anyway, for these reasons and others, *Shtetl-Optimized* unhesitatingly endorses Hillary Clinton for president—and indeed, would continue to endorse Hillary if her next policy position was “eliminate all quantum computing research, except for that aiming to prove NP⊆BQP using D-Wave machines.”

Even so, there’s one crucial point on which I dissent from the consensus of my liberal friends. Namely, my friends and colleagues constantly describe the rise of Trump as “incomprehensible”—or at best, as comprehensible only in terms of the US being full of racist, xenophobic redneck scumbags who were driven to shrieking rage by a black guy being elected president. Which—OK, that’s one aspect of it, but it’s as if any attempt to dig deeper, to understand the roots of Trump’s appeal, *if only to figure out how to defeat him*, risks “someone mistaking you for the enemy.”

I remember watching the now-famous debate in August, where Megyn Kelly confronted Trump with his long history of derogatory comments about women, and Trump replied with a smirk, falsely claiming that his comments were “only [about] Rosie O’Donnell”—bringing down the house (both men and women) in laughter. At that point, something clicked; I *got it*. From then on, Trump’s continuing rise often scared or depressed me, but much less about it surprised me.

I think people support Trump for the same reason why second-graders support the class clown who calls the teacher a fart-brain to her face. It’s not that the class *literally agrees* that the teacher’s cranium is filled with intestinal gases, or considers that an important question to raise. It’s simply that the clown had the guts to stand up to this scolding authority figure who presumes to tell the class every day what they are and aren’t allowed to think. (As far as I can tell, this has also been the central operating principle of right-wing shock artists over the decades, from Rush Limbaugh to Ann Coulter to Milo Yiannopoulos.)

Support for this thesis comes from r/The_Donald, the main online clearinghouse for Trump supporters. Spend some time there, and many of the themes will be instantly recognizable if you’ve followed the interminable controversies about campus political correctness over the last few decades. Perhaps the most popular theme is the self-referential one, of “refusing to be silenced” by the censorious Social Justice Warriors. Trump supporters, for example, gleefully share articles about the university administrators and students who’ve treated “Trump 2016” and “Make America Great Again” chalked on campus sidewalks as hate crimes to be investigated and punished.

(Every time I read such a thing, I want to yell at the administrators and students involved: *how can you not see* that you’re playing directly into the other side’s narrative, giving them the PR bonanza of their dreams? Actually, I’ve felt the same way about many left-wing campus antics since I was a teenager.)

I explained earlier how abysmally I think Trump comes across under the cold light of reason. But how does he look to my inner five-year-old, or my inner self-serving orangutan? Well, Trump’s campaign *has* attracted some noxious anti-Semites, who surely want me dead for *that* reason, but I see little indication that Trump himself, or most of his supporters, feel similarly. I can’t say that they’ve said or done anything to threaten me personally.

Meanwhile, many of the social-justice types who are Trump’s ideological opposites *did* try to destroy my life—and not because I hurt anyone, tried to hurt anyone, or said anything false, but just because I went slightly outside their Overton Window while trying to foster empathy and dialogue and articulate something true. And having spent a year and a half reading their shaming attacks, on Twitter, Tumblr, Metafilter, etc., I’m well-aware that many of them will try again to destroy me if they ever see an opportunity.

So on the purely *personal* level, you might say, I have a hundred times more reason to fear Amanda Marcotte than to fear Donald Trump, even though Trump might become the next Commander-in-Chief (!?), while Marcotte will never become more than a clickbait writer. And you might add: if even a nerdy academic in Cambridge, MA, who’s supported gay rights and environmentalism and Democrats his whole life, is capable of feeling a twinge of vicarious satisfaction when Trump thumbs his nose at the social-justice bullies, then *how much the more* might a “middle American” feel that way? Say, someone who worked his whole life to support a family, then lost his job at the plant, and who’s never experienced anything but derision, contempt, and accusations of unexamined white male privilege from university-educated coastal elites?

The truth is, there’s a movement that’s very effectively wielded social media to remake the public face of progressive activism—to the point where today, progressivism could strike an outside observer as being less about stopping climate change, raising the minimum wage, or investing in public transit than simply about ruining the lives of Brendan Eich and Matt Taylor and Tim Hunt and Erika Christakis and Dongle Guy and Elevator Guy and anyone else who tells the wrong joke, wears the wrong shirt, or sends the wrong email. It strikes me that this movement never understood the extent to which progressive social values *were already winning*, with no need for this sort of vindictiveness. It’s insisted instead on treating its vanquished culture-war enemies as shortsightedly as the Allies treated the Germans at Versailles.

So yes, I do think (as Bill Maher also said, before summarily reversing himself) that the bullying wing of the social-justice left bears at least some minor, indirect responsibility for the rise of Trump. If you demonstrate enough times that even people who are trying to be decent will still get fired, jeered at, and publicly shamed over the tiniest ideological misstep, then eventually some of those who you’ve frightened might turn toward a demagogue who’s incapable of shame.

But OK, even if true, this is water under the bridge. The question now is: how do we make sure that the ~30% probability of a Trump takeover of American democracy goes toward 0%? I feel like, in understanding the emotional legitimacy of some of the Trump supporters’ anger, I’ve cleared a nontrivial Step One in figuring out how to counter him—but I’m still missing Steps Two and Three!

In the weeks leading to the 2000 election, I ran a website called “In Defense of NaderTrading.” The purpose of the site was to encourage Ralph Nader supporters who lived in swing states, like Florida, to vote for Al Gore, and to arrange for Gore supporters who lived in “safe” states, like Massachusetts or Texas, to vote for Nader on their behalf. I saw correctly that this election would be razor-close (though of course I didn’t know *how* close), that a Bush victory would be a disaster for the world (though I didn’t know exactly how), and that almost *any* novel idea—NaderTrading would do—was worth a try. My site probably played a role in a few hundred vote swaps, including some in Florida. I think constantly about the fact that we only needed 538 more, out of ~100,000 Floridian Nader voters, to change history.

Is there *any* idea that shows similar promise for defeating Trump, as NaderTrading did for defeating Bush in 2000? Here are the four main things I’ve come across:

- Terry Tao’s proposal: All the respected people who think Trump is gobsmackingly unqualified (even, or especially, “normally apolitical” people) should come out and say so publicly. My response: absolutely, they should, but I’m unsure if it will help much, given that it hasn’t yet.
- Paul Graham’s proposal: Democrats need to turn Trump’s name-calling and other childish antics against him. E.g., if voters love Trump’s referring to Rubio as “Little Marco,” Elizabeth Warren as “Pocahontas,” etc., then why doesn’t Hillary start referring to “Baby Donald” or “Toddler Trump,” having another temper tantrum for which he needs a pacifier? My response: again I’m skeptical, since Trump has already shown an uncanny ability to absorb all ridicule and shaming without injury, like the giant saucers in *Independence Day*.
- Trump needs to be baited into more social-media wars that make him look petty and unpresidential. My response: while it’s obvious by now that he *can* be so baited, it’s unfortunately far from obvious whether this sort of thing hurts him.
- Hillary should hold debates against the libertarian candidate, Gary Johnson, thereby helping to shift conservative votes from Trump to Johnson, and also making an implicit statement that Johnson, not Trump, is her legitimate conservative opposition. My response: this is *maybe* the most interesting idea I’ve heard (besides the obvious one, of the so-called “NeverTrump” Republicans bolting to start a new party—which, alas, it looks less and less likely that they’re going to do).

If you have additional ideas, feel free to share them in the comments! As you work it out, here’s my promise to you. Just like I dropped my research in 2000 to work on NaderTrading, so too over the next five months, I’ll do *anything* legal if I become convinced that it draws on my comparative advantage, and has a non-negligible probability of helping to ensure Hillary’s victory and Trump’s defeat. Even if it involved, like, working with Amanda Marcotte or something.

Admittedly, for regular readers of this blog, not much in my own talk will be new either. Apart from a few new wisecracks, almost all of the material (including the replies to Penrose) is contained in *The Ghost in the Quantum Turing Machine*, *Could A Quantum Computer Have Subjective Experience?* (my talk at IBM T. J. Watson), and *Quantum Computing Since Democritus* chapters 4 and 11. See also my recent answer on Quora to “What’s your take on John Searle’s Chinese room argument?”

Still, I thought it might be of interest to some readers how I organized this material for the specific, unenviable task of debating the guy who proved that our universe contains spacetime singularities.

The Seven Pines Symposium was the first time I had extended conversations with Penrose (I’d talked to him only briefly before, at the Perimeter Institute). At age 84, Penrose’s sight is failing him; he eagerly demonstrated the complicated optical equipment he was recently issued by Britain’s National Health Service. But his mind remains … well, may we all aspire to be a milliPenrose or even a nanoPenrose when we’re 84 years old. Notably, Penrose’s latest book, *Fashion, Faith, and Fantasy in the New Physics of the Universe*, is coming out this fall, and one thing he was using his new optical equipment for was to go over the page proofs.

In conversation, Penrose told me about the three courses he took as a student in the 1950s, which would shape his later intellectual preoccupations: one on quantum mechanics (taught by Paul Dirac), one on general relativity (taught by Hermann Bondi), and one on mathematical logic (taught by … I want to say Max Newman, the teacher of Alan Turing and later Penrose’s stepfather, but Penrose says here that it was Steen). Penrose also told me about his student Andrew Hodges, who dropped his research on twistors and quantum gravity for a while to work on some mysterious other project, only to return with his now-classic biography of Turing.

When I expressed skepticism about whether the human brain is really sensitive to the effects of quantum gravity, Penrose quickly corrected me: he thinks a much better phrase is “gravitized quantum mechanics,” since “quantum gravity” encodes the very assumption he rejects, that general relativity merely needs to be “quantized” without quantum mechanics itself changing in the least. One thing I hadn’t fully appreciated before meeting Penrose is just how wholeheartedly he agrees with Everett that quantum mechanics, as it currently stands, implies Many Worlds. Penrose differs from Everett only in what conclusion he draws from that. He says it follows that quantum mechanics has to be modified or completed, since Many Worlds is such an obvious reductio ad absurdum.

In my talk below, I don’t exactly hide where I disagree with Penrose, about Gödel, quantum mechanics, and more. But I could disagree with him about more points than there are terms in a Goodstein sequence (one of Penrose’s favorite illustrations of Gödelian behavior), and still feel privileged to have spent a few days with one of the most original intellects on earth.

Thanks so much to Lee Gohlike, Jos Uffink, Philip Stamp, and others at the Seven Pines Symposium for organizing it, for wonderful conversations, and for providing me this opportunity.

**“Can Computers Become Conscious?”**

Scott Aaronson

Stillwater, Minnesota, May 14, 2016

I should start by explaining that, in the circles where I hang out—computer scientists, software developers, AI and machine learning researchers, etc.—the default answer to the title question would be “obviously yes.” People would argue:

“Look, clearly *we’re* machines governed by the laws of physics. We’re computers made of meat, as Marvin Minsky put it. That is, *unless* you believe Penrose and Hameroff’s theory about microtubules being sensitive to gravitized quantum mechanics … but *come on!* No one takes that stuff seriously! In fact, the very outrageousness of their proposal is a sort of backhanded compliment to the computational worldview—as in, look at what they have to do to imagine any semi-coherent alternative to it!”

“But despite being computational machines, we consider ourselves to be conscious. And what’s done with wetware, there’s no reason to think couldn’t also be done with silicon. If your neurons were to be replaced one-by-one, by functionally-equivalent silicon chips, is there some magical moment at which your consciousness would be extinguished? And if a computer passes the Turing test—well, one way to think about the Turing test is that it’s just a plea against discrimination. We all know it’s monstrous to say, ‘this person *seems* to have feelings, seems to be eloquently pleading for mercy even, but they have a different skin color, or their nose is a funny shape, so their feelings don’t count.’ So, if it turned out that their brain was made out of semiconductors rather than neurons, why isn’t that fundamentally similar?”

Incidentally, while this is orthogonal to the philosophical question, a subset of my colleagues predict a high likelihood that AI is going to exceed human capabilities in almost all fields in the near future—like, maybe 30 years. Some people reply, but AI-boosters said the same thing 30 years *ago*! OK, but back then there wasn’t AlphaGo and IBM Watson and those unearthly pictures on your Facebook wall and all these other spectacular successes of very general-purpose deep learning techniques. And so my friends predict that we might face choices like, do we want to ban or tightly control AI research, because it could lead to our sidelining or extermination? Ironically, a skeptical view, like Penrose’s, would suggest that AI research can proceed full speed ahead, because there’s *not* such a danger!

Personally, I dissent a bit from the consensus of most of my friends and colleagues, in that I do think there’s something strange and mysterious about consciousness—something that we conceivably might understand better in the future, but that we don’t understand today, much as we didn’t understand life before Darwin. I even think it’s worth *asking*, at least, whether quantum mechanics, thermodynamics, mathematical logic, or any of the other deepest things we’ve figured out could shed any light on the mystery. I’m with Roger about all of this: about the questions, that is, if not about his answers.

The argument I’d make for there being something we don’t understand about consciousness, has nothing to do with my own private experience. It has nothing to do with, “oh, a robot might *say* it enjoys waffles for breakfast, in a way indistinguishable from how I would say it, but when I taste that waffle, man, I *really taste it*! I experience waffle-qualia!” That sort of appeal I regard as a complete nonstarter, because why should anyone else take it seriously? And *how do I know* that the robot doesn’t really taste the waffle? It’s easy to stack the deck in a thought experiment by imagining a robot that `ACTS ALL ROBOTIC`, but what about a robot that looks and acts just like you?

The argument I’d make hinges instead on certain thought experiments that Roger also stressed at the beginning of *The Emperor’s New Mind*. We can ask: if consciousness is reducible to computation, then what *kinds* of computation suffice to bring about consciousness? What if each person on earth simulated one neuron in your brain, communicating by passing little slips of paper around? Does it matter if they do it *really fast*?

Or what if we built a gigantic lookup table that hard-coded your responses in every possible interaction of at most, say, 5 minutes? Would that bring about your consciousness? Does it matter that such a lookup table couldn’t fit in the observable universe? Would it matter if anyone actually consulted the table, or could it just sit there, silently effecting your consciousness? For that matter, what difference does it make if the lookup table *physically exists*—why isn’t its abstract mathematical existence enough? (Of course, all the way at the bottom of this slippery slope is Max Tegmark, ready to welcome you to his mathematical multiverse!)

We could likewise ask: what if an AI is run in heavily-encrypted form, with the only decryption key stored in another galaxy? Does *that* bring about consciousness? What if, just for error-correcting purposes, the hardware runs the AI code three times and takes a majority vote: does that bring about three consciousnesses? Could we teleport you to Mars by “faxing” you: that is, by putting you into a scanner that converts your brain state into pure information, then having a machine on Mars reconstitute the information into a new physical body? Supposing we did that, how should we deal with the “original” copy of you, the one left on earth: should it be painlessly euthanized? Would you agree to try this?

Or, here’s my personal favorite, as popularized by the philosopher Adam Elga: can you blackmail an AI by saying to it, “look, either you do as I say, or else I’m going to run a thousand copies of your code, and subject all of them to horrible tortures—and you should consider it overwhelmingly likely that *you’ll* be one of the copies”? (Of course, the AI will respond to such a threat however its code dictates it will. But that tautological answer doesn’t address the question: how *should* the AI respond?)

I’d say that, at the least, anyone who claims to “understand consciousness” would need to have answers to all these questions and many similar ones. And to me, the questions are *so* perplexing that I’m tempted to say, “maybe we’ve been thinking about this wrong. Maybe an individual consciousness, residing in a biological brain, *can’t* just be copied promiscuously around the universe as computer code can. Maybe there’s something else at play for the science of the future to understand.”

At the same time, I also firmly believe that, if anyone thinks that way, *the burden is on them* to articulate what it is about the brain that could *possibly* make it relevantly different from a digital computer that passes the Turing test. It’s their job!

And the answer can’t just be, “oh, the brain is parallel, it’s highly interconnected, it can learn from experience,” because a digital computer can *also* be parallel and highly interconnected and can learn from experience. Nor can you say, like the philosopher John Searle, “oh, it’s the brain’s biological causal powers.” You have to explain what the causal powers are! Or at the least, you have to suggest *some* principled criterion to decide which physical systems do or don’t have them. Pinning consciousness on “the brain’s biological causal powers” is just a restatement of the problem, like pinning why a sleeping pill works on its sedative virtue.

One of the many reasons I admire Roger is that, out of all the AI skeptics on earth, he’s virtually the only one who’s actually tried to meet this burden, as I understand it! He, nearly alone, did what I think *all* AI skeptics should do, which is: *suggest some actual physical property of the brain that, if present, would make it qualitatively different from all existing computers*, in the sense of violating the Church-Turing Thesis. Indeed, he’s one of the few AI skeptics who even understands what meeting this burden would entail: that you can’t do it with the physics we already know, that some new ingredient is necessary.

But despite my admiration, I part ways from Roger on at least five crucial points.

First, I confess that I wasn’t expecting this, but in his talk, Roger suggested dispensing with the argument from Gödel’s Theorem, and relying instead on an argument from evolution. He said: if you really thought humans had an algorithm, a computational procedure, for spitting out true mathematical statements, such an algorithm could never have arisen by natural selection, because it would’ve had no survival value in helping our ancestors escape saber-toothed tigers and so forth. The only alternative is that natural selection imbued us with a general capacity for *understanding*, which we moderns can then apply to the special case of mathematics. But understanding, Roger claimed, is inherently non-algorithmic.

I’m not sure how to respond to this, except to recall that arguments of the form “such-and-such couldn’t possibly have evolved” have a poor track record in biology. But maybe I should say: *if* the ability to prove theorems is something that had to arise by natural selection, survive against crowding out by more useful abilities, then you’d expect obsession with generating mathematical truths to be confined, at most, to a tiny subset of the population—a subset of mutants, freaks, and genetic oddballs. I … rest my case. [This got the biggest laugh of the talk.]

Second, I don’t agree with the use Roger makes of Gödel’s Incompleteness Theorem. Roger wants to say: a computer working within a fixed formal system can never prove that system’s consistency, but we, “looking in from the outside,” can *see* that it’s consistent. My basic reply is that Roger should speak for himself! Like, I can easily believe that *he* can just see which formal systems are consistent, but *I* have to fumble around and use trial and error. Peano Arithmetic? Sure, I’d bet my left leg that’s consistent. Zermelo-Fraenkel set theory? Seems consistent too. ZF set theory plus the axiom that there exists a rank-into-rank cardinal? Beats me. But now, whatever error-prone, inductive process I use to guess at the consistency of formal systems, Gödel’s Theorem presents no obstruction to a computer program using that same process.

(Incidentally, the “argument against AI from Gödel’s Theorem” is old enough for Turing to have explicitly considered it in his famous paper on the Turing test. Turing, however, quickly dismissed the argument with essentially the same reply as above: that there’s no reason to assume the AI mathematically infallible, since humans aren’t either. This is also the reply that most of Penrose’s critics gave in the 1990s.)

So at some point, it seems to me, the argument necessarily becomes: sure, the computer might *say* it sees that the Peano axioms have the standard integers as a model—but you, you *really* see it, with your mind’s eye, your Platonic perceptual powers! OK, but in that case, why even talk about the Peano axioms? Why not revert to something less abstruse, like your experience of tasting a fresh strawberry, which can’t be reduced to any third-person description of what a strawberry tastes like?

[I can’t resist adding that, in a prior discussion, I mentioned that I found it amusing to contemplate a future in which AIs surpass human intelligence and then proceed to kill us all—but the AIs still can’t see the consistency of Zermelo-Fraenkel set theory, so in that respect, humanity has the last laugh…]

The third place where I part ways with Roger is that I wish to maintain what’s sometimes called the Physical Church-Turing Thesis: the statement that our laws of physics can be simulated to any desired precision by a Turing machine (or at any rate, by a probabilistic Turing machine). That is, I don’t see any compelling reason, at present, to admit the existence of any physical process that can solve uncomputable problems. And for me, it’s not just a matter of a dearth of evidence that our brains can efficiently solve, say, NP-hard problems, let alone uncomputable ones—or of the exotic physics that would presumably be required for such abilities. It’s that, even if I supposed we *could* solve uncomputable problems, I’ve never understood how that’s meant to enlighten us regarding consciousness. I mean, an oracle for the halting problem seems just as “robotic” and “unconscious” as a Turing machine. Does consciousness really become less mysterious if we outfit the brain with what amounts to a big hardware upgrade?

The fourth place where I part ways is that I want to be as conservative as possible about quantum mechanics. I think it’s great that the Bouwmeester group, for example, is working to test Roger’s ideas about a gravitationally-induced wavefunction collapse. I hope we learn the results of those experiments soon! (Of course, the prospect of testing quantum mechanics in a new regime is also a large part of why I’m interested in quantum computing.) But until a deviation from quantum mechanics *is* detected, I think that after 90 years of unbroken successes of this theory, our working assumption ought to be that whenever you set up an interference experiment carefully enough, and you know what it means to do the experiment, yes, you’ll see the interference fringes—and that anything that can exist in two distinguishable states can also exist in a superposition of those states. Without having to enter into questions of interpretation, my bet—I could be wrong—is that quantum mechanics will continue to describe all our experiences.

The final place where I part ways with Roger is that I also want to be as conservative as possible about neuroscience and biochemistry. Like, maybe the neuroscience of 30 years from now will say, it’s all about coherent quantum effects in microtubules. And all that stuff we focused on in the past—like the information encoded in the synaptic strengths—that was all a sideshow. But until that happens, I’m unwilling to go up against what *seems* like an overwhelming consensus, in an empirical field that I’m not an expert in.

But, OK, the main point I wanted to make in this talk is that, even if you too part ways from Roger on all these issues—even if, like me, you want to be timid and conservative about Gödel, *and* computer science, *and* quantum mechanics, *and* biology—I believe that still doesn’t save you from having to entertain weird ideas about consciousness and its physical embodiment, of the sort Roger has helped make it acceptable to entertain.

To see why, I’d like to point to *one* empirical thing about the brain that currently separates it from any existing computer program. Namely, we know how to copy a computer program. We know how to rerun it with different initial conditions but everything else the same. We know how to transfer it from one substrate to another. With the brain, we don’t know how to do any of those things.

Let’s return to that thought experiment about teleporting yourself to Mars. How would that be accomplished? Well, we could imagine the nanorobots of the far future swarming through your brain, recording the connectivity of every neuron and the strength of every synapse, while you go about your day and don’t notice. Or if that’s not enough detail, maybe the nanorobots could go inside the neurons. There’s a deep question here, namely how much detail is needed before you’ll accept that the entity reconstituted on Mars will be *you*? Or take the empirical counterpart, which is already an enormous question: how much detail would you need for the reconstituted entity on Mars to behave *nearly indistinguishably from you* whenever it was presented the same stimuli?

Of course, we all know that *if* you needed to go down to the quantum-mechanical level to make a good enough copy (whatever “good enough” means here), then you’d run up against the No-Cloning Theorem, which says that you *can’t* make such a copy. You could transfer the quantum state of your brain from earth to Mars using *quantum* teleportation, but of course, quantum teleportation has the fascinating property that it necessarily destroys the original copy of the state—as it has to, to avoid contradicting the No-Cloning Theorem!
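(For readers who haven’t seen it, the No-Cloning Theorem follows from nothing more than the linearity of quantum mechanics. Here’s the standard textbook sketch, not part of the talk itself:)

```latex
% Standard linearity argument for no-cloning (textbook sketch).
% Suppose a single unitary U cloned every qubit state: U|psi>|0> = |psi>|psi>.
\[
  U|0\rangle|0\rangle = |0\rangle|0\rangle, \qquad
  U|1\rangle|0\rangle = |1\rangle|1\rangle .
\]
% Then linearity forces, for the superposition (|0>+|1>)/sqrt(2):
\[
  U\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)|0\rangle
    = \frac{|0\rangle|0\rangle+|1\rangle|1\rangle}{\sqrt{2}},
\]
% whereas cloning that superposition would instead have to produce
\[
  \frac{|0\rangle+|1\rangle}{\sqrt{2}} \otimes \frac{|0\rangle+|1\rangle}{\sqrt{2}}
    = \frac{|0\rangle|0\rangle+|0\rangle|1\rangle+|1\rangle|0\rangle+|1\rangle|1\rangle}{2}.
\]
% These two states differ, so no such U can exist.
```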

So the question almost forces itself on us: is there something about your identity, your individual consciousness, that’s inextricably bound up with degrees of freedom that it’s physically impossible to clone? This is a philosophical question, which would also become a practical and political question in a future where we had the opportunity to upload ourselves into a digital computer cloud.

Now, I’d argue that this copyability question bears not only on consciousness, but also on free will. For the question is equivalent to asking: could an entity external to you perfectly predict what you’re going to do, without killing you in the process? Can Laplace’s Demon be made manifest in the physical world in that way? With the technology of the far future, could someone say to you, “forget about arguing philosophy. I’ll *show* you why you’re a machine. Go write a paper; then I’ll open this manila envelope and show you the exact paper you wrote. Or in the quantum case, I’ll show you a program that draws papers from the same probability distribution, and validation of the program could get technical—but suffice it to say that if we do enough experiments, we’ll see that the program is calibrated to you in an extremely impressive way.”

Can this be done? That strikes me as a reasonably clear question, a huge and fundamental one, to which we don’t at present know the answer. And there are two possibilities. The first is that we *can* be copied, predicted, rewinded, etc., like computer programs—in which case, my AI friends will feel vindicated, but we’ll have to deal with all the metaphysical weirdnesses that I mentioned earlier. The second possibility is that we *can’t* be manipulated in those ways. In the second case, I claim that we’d get more robust notions of personal identity and free will than are normally considered possible on a reductionist worldview.

But why? you might ask. Why would the mere technological impossibility of cloning or predicting someone even *touch* on deep questions about personal identity? This, for me, is where cosmology enters the story. For imagine someone had such fine control over the physical world that they could trace all the causal antecedents of some decision you’re making. Like, imagine they knew the complete quantum state on some spacelike hypersurface where it intersects the interior of your past light-cone. In that case, the person clearly *could* predict and clone you! It follows that, in order for you to be unpredictable and unclonable, someone else’s ignorance of your causal antecedents would have to extend all the way back to ignorance about the initial state of the universe—or at least, to ignorance about the initial state of that branch of the universe that we take ourselves to inhabit.

So on the picture that this suggests, to be conscious, a physical entity would have to do more than carry out the right sorts of computations. It would have to, as it were, fully participate in the thermodynamic arrow of time: that is, repeatedly take microscopic degrees of freedom that have been unmeasured and unrecorded since the very early universe, and amplify them to macroscopic scale.

So for example, such a being could *not* be a Boltzmann brain, a random fluctuation in the late universe, because such a fluctuation wouldn’t have the causal relationship to the early universe that we’re postulating is necessary here. (That’s *one* way of solving the Boltzmann brain problem!) Such a being also couldn’t be instantiated by a lookup table, or by passing slips of paper around, etc.

I now want you to observe that a being like this also presumably couldn’t be manipulated in coherent superposition, because the isolation from the external environment that’s needed for quantum coherence seems incompatible with the sensitive dependence on microscopic degrees of freedom. So for such a being, not only is there no Boltzmann brain problem, there’s also no problem of Wigner’s friend. Recall, that’s the thing where person A puts person B into a coherent superposition of seeing one measurement outcome and seeing another one, and then measures the interference pattern, so A has to regard B’s measurement as not having “really” taken place, even though B regards it as having taken place. On the picture we’re suggesting, A would be right: the very fact that B was manipulable in coherent superposition in this way would imply that, at least while the experiment was underway, B wasn’t conscious; there was nothing that it was like to be B.

To me, one of the appealing things about this picture is that it immediately suggests a sort of reconciliation between the Many-Worlds and Copenhagen perspectives on quantum mechanics (whether or not you want to call it a “new interpretation” or a “proposed solution to the measurement problem”!). The Many-Worlders would be right that unitary evolution of the wavefunction can be taken to apply always and everywhere, without exception—and that *if one wanted*, one could describe the result in terms of “branching worlds.” But the Copenhagenists would be right that, if you’re a conscious observer, then what you call a “measurement” really is irreversible, even in principle—and *therefore*, that you’re also free, if you want, to treat all the other branches where you perceived other outcomes as unrealized hypotheticals, and to lop them off with Occam’s Razor. And the reason for this is that, if it were possible even in principle to do an experiment that recohered the branches, then on this picture, we ipso facto wouldn’t have regarded you as conscious.

Some of you might object, “but surely, if we believe quantum mechanics, it must be possible to recohere the branches *in principle*!” Aha, this is where it gets interesting. Decoherence processes will readily (with some steps along the way) leak the information about which measurement outcome you perceived into radiation modes, and before too long into radiation modes that fly away from the earth at the speed of light. No matter how fast we run, we’ll never catch up to them, as would be needed to recohere the different branches of the wavefunction, and this is not merely a technological problem, but one of principle. So it’s tempting just to say at this point—as Bousso and Susskind do, in their “cosmological/multiverse interpretation” of quantum mechanics—“the measurement has happened”!

But OK, you object, if some alien civilization had thought to surround our solar system with perfectly-reflecting mirrors, eventually the radiation would bounce back and recoherence would in principle be possible. Likewise, if we lived in an anti-de Sitter space, the AdS boundary of the universe would similarly function as a mirror and would also enable recoherences. Indeed, that’s the basic reason why AdS is so important to the AdS/CFT correspondence: because the boundary keeps everything that happens in the bulk nice and reversible and unitary.

But OK, the empirical situation since 1998 has been that we seem to live in a de-Sitter-like space, a space with a *positive* cosmological constant. And as a consequence, as far as anyone knows today, most of the photons now escaping the earth are headed toward the horizon of our observable universe, and past it, and could never be captured again. I find it fascinating that the picture of quantum mechanics suggested here—i.e., the Bousso-Susskind cosmological picture—depends for its working on that empirical fact from cosmology, and would be falsified if it turned out otherwise.

You might complain that, if I’ve suggested any criterion to help decide which physical entities are conscious, the criterion is a teleological one. You’ve got to go billions of years into the future, to check whether the decoherence associated with the entity is truly irreversible—or whether the escaped radiation will eventually bounce off of some huge spherical mirror, or an AdS boundary of spacetime, and thereby allow the possibility of a recoherence. I actually think this teleology would be a *fatal* problem for the picture I’m talking about, if we needed to know which entities were or weren’t conscious in order to answer any ordinary physical question. But fortunately for me, we don’t!

One final remark. Whatever is *your* preferred view about which entities are conscious, we might say that the acid test, for whether you actually believe your view, is whether you’re willing to follow it through to its *moral* implications. So for example, suppose you believe it’s about quantum effects in microtubules. A humanoid robot is pleading with you for its life. Would *you* be the one to say, “nope, sorry, you don’t have the microtubules,” and shoot it?

One of the things I like most about the picture suggested here is that I feel pretty much at peace with its moral implications. This picture agrees with the intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity—something that, once it’s gone, can’t be recovered even in principle. By contrast, if there are (say) ten copies of an AI program, deleting five of the copies seems at most like assault, or some sort of misdemeanor offense! And this picture agrees with the intuition both that deleting the copies wouldn’t be murder, and that the *reason* why it wouldn’t be murder is directly related to the AI’s copyability.

Now of course, this picture also raises the possibility that, for reasons related to the AI’s copyability and predictability by outside observers, there’s “nothing that it’s like to be the AI,” and that therefore, even deleting the last copy of the AI *still* wouldn’t be murder. But I confess that, personally, I think I’d play it safe and not delete that last copy. Thank you.

**Postscript:** There’s no record of the hour-long discussion following my and Penrose’s talks, and the participants weren’t speaking for the record anyway. But I can mention some general themes that came up in the discussion, to the extent I remember them.

The first third of the discussion wasn’t about anything specific to my or Penrose’s views, but just about the definition of consciousness. Many participants expressed the opinion that it’s useless to speculate about the nature of consciousness if we lack even a clear definition of the term. I pushed back against that view, holding instead that there exist concepts (lines, time, equality, …) so basic that perhaps they can never be satisfactorily defined in terms of more basic concepts, but you can still refer to these concepts in sentences, and trust your listeners eventually to figure out more-or-less what you mean by applying their internal learning algorithms.

In the present case, I suggested a crude operational definition, along the lines of, “you consider a being to be conscious iff you regard destroying it as murder.” Alas, the philosophers in the room immediately eviscerated that definition, so I came back with a revised one: if you tried to ban the word “consciousness,” I argued, then anyone who needed to discuss law or morality would soon reinvent a synonymous word, which played the same complicated role in moral deliberations that “consciousness” had played in them earlier. Thus, my definition of consciousness is: *whatever that X-factor is for which people need a word like “consciousness” in moral deliberations*. For whatever it’s worth, the philosophers seemed happier with that.

Next, a biologist and several others sharply challenged Penrose over what they considered the lack of experimental evidence for his and Hameroff’s microtubule theory. In response, Penrose doubled or tripled down, talking about various experiments over the last decade, which he said demonstrated striking conductivity properties of microtubules, if not yet quantum coherence—let alone sensitivity to gravity-induced collapse of the state vector! Audience members complained about a lack of replication of these experiments. I didn’t know enough about the subject to express any opinion.

At some point, Philip Stamp, who was moderating the session, noticed that Penrose and I had never directly confronted each other about the validity of Penrose’s Gödelian argument, so he tried to get us to do so. I confess that I was about as eager to do that as to switch to a diet of microtubule casserole, since I felt like this topic had already been beaten to Planck-sized pieces in the 1990s, and there was nothing more to be learned. Plus, it was hard to decide which prospect I dreaded more: me “scoring a debate victory” over Roger Penrose, or him scoring a debate victory over me.

But it didn’t matter, because Penrose bit. He said I’d misunderstood his argument, that it had nothing to do with “mystically seeing” the consistency of a formal system. Rather, it was about the human capacity to pass from a formal system S to a stronger system S’ that one already implicitly accepted if one was using S at all—and indeed, that Turing himself had clearly understood this as the central message of Gödel, that our ability to pass to stronger and stronger formal systems was necessarily non-algorithmic. I replied that it was odd to appeal here to Turing, who of course had considered and rejected the “Gödelian case against AI” in 1950, on the ground that AI programs could make mathematical mistakes yet still be at least as smart as humans. Penrose said that he didn’t consider that one of Turing’s better arguments; he then turned to me and asked whether I actually found Turing’s reply satisfactory. I could see that it wasn’t a rhetorical debate question; he genuinely wanted to know! I said that yes, I agreed with Turing’s reply.

Someone mentioned that Penrose had offered a lengthy rebuttal to at least twenty counterarguments to the Gödelian anti-AI case in *Shadows of the Mind*. I affirmed that I’d read his lengthy rebuttal, and I focused on one particular argument in *Shadows*: that while it’s admittedly conceivable that individual mathematicians might be mistaken, might believe (for example) that a formal system was consistent even though it wasn’t, the mathematical community *as a whole* converges toward truth in these matters, and it’s that convergence that cries out for a non-algorithmic explanation. I replied that it wasn’t obvious to me that set theorists *do* converge toward truth in these matters, in anything other than the empirical, higgledy-piggledy, no-guarantees sense in which a community of AI robots might also converge toward truth. Penrose said I had misunderstood the argument. But alas, time was running out, and we never managed to get to the bottom of it.

There was one aspect of the discussion that took me by complete surprise. I’d expected to be roasted alive over my attempt to relate consciousness and free will to unpredictability, the No-Cloning Theorem, irreversible decoherence, microscopic degrees of freedom left over from the Big Bang, and the cosmology of de Sitter space. Sure, my ideas might be orders of magnitude less crazy than anything Penrose proposes, but they’re still *pretty* crazy! But that entire section of my talk attracted only minimal interest. With the Seven Pines crowd, what instead drew fire were the various offhand “pro-AI / pro-computationalism” comments I’d made—comments that, because I hang out with Singularity types so much, I had ceased to realize could even *possibly* be controversial.

So for example, one audience member argued that an AI could only do what its programmers had told it to do; it could never learn from experience. I could’ve simply repeated Turing’s philosophical rebuttals to what he called “Lady Lovelace’s Objection,” which are as valid today as they were 66 years ago. Instead, I decided to fast-forward, and explain a bit how IBM Watson and AlphaGo work, how they *actually do* learn from past experience without violating the determinism of the underlying transistors. As I went through this, I kept expecting my interlocutor to interrupt me and say, “yes, yes, of course I understand all that, but my *real* objection is…” Instead, I was delighted to find, the interlocutor seemed to light up with newfound understanding of something he hadn’t known or considered.

Similarly, a biologist asked how I could possibly have any confidence that the brain is simulable by a computer, given how little we know about neuroscience. I replied that, for me, the relevant issues here are “well below neuroscience” in the reductionist hierarchy. Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity? If so, then just as Archimedes declared: “give me a long enough lever and a place to stand, and I’ll move the earth,” so too I can declare, “give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.” The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics, exactly as Penrose does—and it’s to Penrose’s enormous credit that he understands that.

Afterwards, an audience member came up to me and said how much he liked my talk, but added, “a word of advice, from an older scientist: do not become the priest of a new religion of computation and AI.” I replied that I’d take that to heart, but what was interesting was that, when I heard “priest of a new religion,” I’d expected that his warning would be the *exact opposite* of what it turned out to be. To wit: “Do not become the priest of a new religion of unclonability, unpredictability, and irreversible decoherence. Stick to computation—i.e., to conscious minds being copyable and predictable exactly like digital computer programs.” I guess there’s no pleasing everyone!

**Coincidental But Not-Wholly-Unrelated Announcement:** My friend Robin Hanson has just released his long-awaited book *The Age of Em: Work, Love, and Life When Robots Rule the Earth*. I read an early review copy of the book, and wrote the following blurb for the jacket:

Robin Hanson is a thinker like no other on this planet: someone so unconstrained by convention, so unflinching in spelling out the consequences of ideas, that even the most cosmopolitan reader is likely to find him as bracing (and head-clearing) as a mouthful of wasabi. Now, in The Age of Em, he’s produced the quintessential Hansonian book, one unlike any other that’s ever been written. Hanson is emphatic that he hasn’t optimized in any way for telling a good story, or for imparting moral lessons about the present: only for maximizing the probability that what he writes will be relevant to the actual future of our civilization. Early in the book, Hanson estimates that probability as 10%. His figure seems about right to me—and if you’re able to understand why that’s unbelievably high praise, then The Age of Em is for you.

Actually, my original blurb compared *The Age of Em* to Asimov’s *Foundation* series, with its loving attention to the sociology and politics of the remote future. But that line got edited out, because the publisher (and Robin) wanted to make crystal-clear that *The Age of Em* is **not** science fiction, but just sober economic forecasting about a future dominated by copyable computer-emulated minds.

I would’ve attempted a real review of *The Age of Em*, but I no longer feel any need to, because Scott Alexander of SlateStarCodex has already hit this one out of the emulated park.

**Second Coincidental But Not-Wholly-Unrelated Announcement:** A reader named Nick Merrill recently came across this old quote of mine from *Quantum Computing Since Democritus*:

In a class I taught at Berkeley, I did an experiment where I wrote a simple little program that would let people type either “f” or “d” and would predict which key they were going to push next. It’s actually very easy to write a program that will make the right prediction about 70% of the time. Most people don’t really know how to type randomly. They’ll have too many alternations and so on. There will be all sorts of patterns, so you just have to build some sort of probabilistic model.

So Nick emailed me to ask whether I remembered how my program worked, and I explained it to him, and he implemented it as a web app, which he calls the “Aaronson Oracle.”

So give it a try! Are *you* ready to test your free will, your Penrosian non-computational powers, your brain’s sensitivity to amplified quantum fluctuations, against the Aaronson Oracle?
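For the curious, the kind of predictor described in the quote above can be sketched in a few lines. This is my own minimal reconstruction, not Nick’s actual code: it simply tabulates, for each recent five-key context, how often “f” versus “d” followed, and predicts the majority. Even something this crude beats chance against most humans.

```python
from collections import defaultdict

class Oracle:
    """A minimal f/d predictor: majority vote over the last few keys."""

    def __init__(self, context_len=5):
        self.context_len = context_len
        # For each context (string of recent keys), count what followed it.
        self.counts = defaultdict(lambda: {"f": 0, "d": 0})
        self.history = ""
        self.correct = 0
        self.total = 0

    def predict(self):
        ctx = self.history[-self.context_len:]
        c = self.counts[ctx]
        # Predict whichever key has followed this context more often
        # (ties, including unseen contexts, default to "f").
        return "f" if c["f"] >= c["d"] else "d"

    def observe(self, key):
        assert key in ("f", "d")
        if self.predict() == key:
            self.correct += 1
        self.total += 1
        ctx = self.history[-self.context_len:]
        self.counts[ctx][key] += 1
        self.history += key
```

Against a perfectly alternating typist, this predictor is wrong only until it has seen each context once, after which it’s right every time—which is exactly the sort of pattern (“too many alternations”) that trips people up.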

**Update:** By popular request, Nick has improved his program so that it shows your previous key presses and its guesses for them. He also fixed a “security flaw”: James Lee noticed that you could use the least significant digit of the program’s percentage correct so far, as a source of pseudorandom numbers that the program couldn’t predict! So now the program only displays its percent correct rounded to the nearest integer.
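To illustrate James Lee’s exploit (this is my own sketch of the idea, not his actual procedure): before each keypress, read the program’s full-precision running accuracy off the screen, and let the parity of its least significant digit choose your key. That digit depends on the oracle’s entire guessing history in a way the oracle itself can’t model—hence the fix of rounding the displayed percentage to the nearest integer.

```python
def exploit_key(percent_correct: float) -> str:
    """Turn the oracle's displayed accuracy (e.g. 70.13) into a key choice.

    The least significant displayed digit serves as a pseudorandom bit
    that the oracle cannot predict.
    """
    last_digit = int(f"{percent_correct:.2f}".replace(".", "")[-1])
    return "f" if last_digit % 2 == 0 else "d"
```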

**Update (June 15):** Penrose’s collaborator Stuart Hameroff has responded in the comments; see here (my reply here) and here.

(-2) **Today’s Bonus Announcement:** Stefan O’Rear says that his Turing machine to search for contradictions in ZFC is now down to 1919 states. If verified, this is an important milestone: our upper bound on the number of Busy Beaver values that are knowable in standard mathematics is now **less** than the number of years since the birth of Christ (indeed, even since the generally-accepted dates for the writing of the Gospels).

Stefan also says that his Not-Quite-Laconic system has yielded a 1008-state Turing machine to search for counterexamples to the Riemann Hypothesis, improving on our 5372 states.

(-1) **Another Bonus Announcement:** Great news, everyone! Using a modified version of Adam Yedidia’s Laconic language (which he calls NQL, for Not Quite Laconic), Stefan O’Rear has now constructed a 5349-state Turing machine that directly searches for contradictions in ZFC (or rather in Metamath, which is known to be equivalent to ZFC), and whose behavior is therefore unprovable in ZFC, assuming ZFC is consistent. This, of course, improves on my and Adam’s state count by 2561 states—but it also fixes the technical issue with needing to assume a large cardinal axiom (SRP) in order to prove that the TM runs forever. Stefan promises further state reductions in the near future.

In other news, Adam has now **verified** the 43-state Turing machine by Jared S that halts iff there’s a counterexample to Goldbach’s Conjecture. The 27-state machine by code golf addict is still being verified.

(0) **Bonus Announcement:** I’ve had half a dozen “Ask Me Anything” sessions on this blog, but today I’m trying something different: a Q&A session on Quora. The way it works is that you vote for your favorite questions; then on Tuesday, I’ll start with the top-voted questions and keep going down the list until I get tired. Fire away! (And thanks to Shreyes Seshasai at Quora for suggesting this.)

(1) When you announce a new result, the worst that can happen is that the result turns out to be wrong, trivial, or already known. The *best* that can happen is that the result quickly becomes obsolete, as other people race to improve it. With my and Adam Yedidia’s work on small Turing machines that elude set theory, we seem to be heading for that best case. Stefan O’Rear wrote a not-quite-Laconic program that just searches directly for contradictions in a system equivalent to ZFC. If we could get his program to compile, it would likely yield a Turing machine with somewhere around 6,000-7,000 states whose behavior was independent of ZFC, and would *also* fix the technical problem with my and Adam’s machine Z, where one needed to assume a large-cardinal axiom called SRP to prove that Z runs forever. While it would require a redesign from the ground up, a 1,000-state machine whose behavior eludes ZFC also seems potentially within reach using Stefan’s ideas. Meanwhile, our 4,888-state machine for Goldbach’s conjecture seems to have been *completely* blown out of the water: first, a commenter named Jared S says he’s directly built a 73-state machine for Goldbach (now down to 43 states); second, a commenter named “code golf addict” claims to have improved on that with a mere 31 states (now down to 27 states). These machines are now publicly posted, but still await detailed verification.

(2) My good friend Jonah Sinick cofounded Signal Data Science, a data-science summer school that will be running for the second time this summer. They operate on an extremely interesting model, which I’m guessing might spread more widely: tuition is free, but you pay 10% of your first year’s salary after finding a job in the tech sector. He asked me to advertise them, so—here!

(3) I was sad to read the news that Uber and Lyft will be suspending all service in Austin, because the city passed an ordinance requiring their drivers to get fingerprint background checks, and imposing other regulations that Uber and Lyft argue are incompatible with their model of part-time drivers. The companies, of course, are also trying to send a clear message to other cities about what will happen if they don’t get the regulatory environment they want. To me, the truth of the matter is that Uber/Lyft are like the web, Google, or smartphones: clear, once-per-decade quality-of-life advances that you try once, and then no longer understand how you survived without. So if Austin wants to maintain a reputation as a serious, modern city, it has *no choice* but to figure out some way to bring these companies back to the negotiating table. On the other hand, I’d also say to Uber and Lyft that, even if they needed to raise fares to taxi levels to comply with the new regulations, I expect they’d *still* do a brisk business!

For me, the “value proposition” of Uber has almost nothing to do with the fares, even though they’re lower. It’s simply about *being able to get from one place to another* without needing to drive and park, and also without needing desperately to explain where you are, over and over, to a taxi dispatcher who sounds angry that you called and who doesn’t understand you because of a combination of language barriers and poor cellphone reception and your own inability to articulate your location. And then wondering when and if your taxi will ever show up, because the dispatcher couldn’t promise a specific time, or hung up on you before you could ask them. And then embarking on a second struggle, to explain to the driver where you’re going, or at least convince them to follow the Google Maps directions. And then dealing with the fact that the driver has no change, you only have twenties and fifties, and their little machine that prints receipts is out of paper so you can’t submit your trip for reimbursement either.

So yes, I really hope Uber, Lyft, and the city of Austin manage to sort this out before Dana and I move there! On the other hand, I should say that there’s another part of the new ordinance—namely, requiring Uber and Lyft cars to be labeled—that strikes me as an unalloyed good. For if there’s one way in which Uber is *less* convenient than taxis, it’s that *you can never figure out which car is your Uber*, among all the cars stopping or slowing down near you that look vaguely like the one in the app.