Archive for the ‘Quantum’ Category

My 5-minute quantum computing talk at the White House

Tuesday, October 25th, 2016

(OK, technically it was in the Eisenhower Executive Office Building, which is not exactly the White House itself, but is adjacent to the West Wing in the White House complex.  And President Obama wasn’t there—maybe, like Justin Trudeau, he already knows everything about quantum computing?  But lots of people from the Office of Science and Technology Policy were!  And some of us talked with Valerie Jarrett, Obama’s adviser, when she passed us on her way to the West Wing.

The occasion was a Quantum Information Science policy workshop that OSTP held, and which the White House explicitly gave us permission to discuss on social media.  Indeed, John Preskill already tweeted photos from the event.  Besides me and Preskill, others in attendance included Umesh Vazirani, Seth Lloyd, Yaoyun Shi, Rob Schoelkopf, Krysta Svore, Hartmut Neven, Stephen Jordan…

I don’t know whether this is the first time that the polynomial hierarchy, or the notion of variation distance, were ever invoked in a speech at the White House.  But in any case, I was proud to receive a box of Hershey Kisses bearing the presidential seal.  I thought of not eating them, but then I got hungry, and realized that I can simply refill the box later if desired.

For regular readers of Shtetl-Optimized, my talk won’t have all that much that’s new, but in any case it’s short.

Incidentally, during the workshop, a guy from OSTP told me that, when he and others at the White House were asked to prepare materials about quantum computing, posts on Shtetl-Optimized (such as Shor I’ll Do It) were a huge help.  Honored though I was to have “served my country,” I winced, thinking about all the puerile doofosities I might’ve self-censored had I had any idea who might read them.  I didn’t dare ask whether anyone at the White House also reads the comment sections!

Thanks so much to all the other participants and to the organizers for a great workshop.  –SA)

Quantum Supremacy

by Scott Aaronson (UT Austin)

October 18, 2016

Thank you; it’s great to be here.  There are lots of directions that excite me enormously right now in quantum computing theory, which is what I work on.  For example, there’s the use of quantum computing to get new insight into classical computation, into condensed matter physics, and recently, even into the black hole information problem.

But since I have five minutes, I wanted to talk here about one particular direction—one that, like nothing else that I know of, bridges theory and experiment in the service of what we hope will be a spectacular result in the near future.  This direction is what’s known as “Quantum Supremacy”—John [Preskill], did you help popularize that term?  [John nods yes]—although some people have been backing away from the term recently, because of the campaign of one of the possible future occupants of this here complex.

But what quantum supremacy means to me, is demonstrating a quantum speedup for some task as confidently as possible.  Notice that I didn’t say a useful task!  I like to say that for me, the #1 application of quantum computing—more than codebreaking, machine learning, or even quantum simulation—is just disproving the people who say quantum computing is impossible!  So, quantum supremacy targets that application.

What is important for quantum supremacy is that we solve a clearly defined problem, with some relationship between inputs and outputs that’s independent of whatever hardware we’re using to solve the problem.  That’s part of why it doesn’t cut it to point to some complicated, hard-to-simulate molecule and say “aha!  quantum supremacy!”

One discovery, which I and others stumbled on 7 or 8 years ago, is that quantum supremacy seems to become much easier to demonstrate if we switch from problems with a single valid output to sampling problems: that is, problems of sampling exactly or approximately from some specified probability distribution.

Doing this has two advantages.  First, we no longer need a full, fault-tolerant quantum computer—in fact, very rudimentary types of quantum hardware appear to suffice.  Second, we can design sampling problems for which we can arguably be more confident that they really are hard for a classical computer, than we are that (say) factoring is classically hard.  I like to say that a fast classical factoring algorithm might collapse the world’s electronic commerce, but as far as we know, it wouldn’t collapse the polynomial hierarchy!  But with sampling problems, at least with exact sampling, we can often show the latter implication, which is about the best evidence you can possibly get for such a problem being hard in the present state of mathematics.

One example of these sampling tasks that we think are classically hard is BosonSampling, which Alex Arkhipov and I proposed in 2011.  BosonSampling uses a bunch of identical photons that are sent through a network of beamsplitters, then measured to count the number of photons in each output mode.  Over the past few years, this proposal has been experimentally demonstrated by quantum optics groups around the world, with the current record being a 6-photon demonstration by the O’Brien group in Bristol, UK.  A second example is the IQP (“Instantaneous Quantum Polynomial-Time”) or Commuting Hamiltonians model of Bremner, Jozsa, and Shepherd.

A third example—no doubt the simplest—is just to sample from the output distribution of a random quantum circuit, let’s say on a 2D square lattice of qubits with nearest-neighbor interactions.  Notably, this last task is one that the Martinis group at Google is working toward achieving right now, with 40-50 qubits.  They say that they’ll achieve it in as little as one or two years, which, translated from experimental jargon, means maybe five years?  But not infinity years.
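
To make “sampling the output distribution of a random circuit” concrete, here is a minimal brute-force sketch in Python/NumPy for a handful of qubits. The linear qubit layout, Haar-random two-qubit gates, and tiny sizes are my simplifying assumptions for illustration; the actual proposal involves a 2D lattice with 40-50 qubits, far beyond this kind of direct state-vector simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    """Haar-random d x d unitary, via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))    # fix the phases of the columns

def random_circuit_state(n_qubits=4, depth=8):
    """Brick-work layers of random 2-qubit gates on a line of qubits."""
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0                                   # start in |00...0>
    for layer in range(depth):
        for q in range(layer % 2, n_qubits - 1, 2):  # alternate even/odd pairs
            gate = haar_unitary(4)
            op = np.kron(np.kron(np.eye(2 ** q), gate),
                         np.eye(2 ** (n_qubits - q - 2)))
            state = op @ state                       # only feasible for tiny n
    return state

state = random_circuit_state()
probs = np.abs(state) ** 2
probs /= probs.sum()                                 # guard against rounding error
samples = rng.choice(len(probs), size=10, p=probs)
print([format(int(s), "04b") for s in samples])      # ten sampled output bitstrings
```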

The challenges on the experimental side are clear: get enough qubits with long enough coherence times to achieve this.  But there are also some huge theoretical challenges remaining.

A first is, can we still solve classically hard sampling problems even in the presence of realistic experimental imperfections?  Arkhipov and I already thought about that problem—in particular, about sampling from a distribution that’s merely close in variation distance to the BosonSampling one—and got results that admittedly weren’t as satisfactory as the results for exact sampling.  But I’m delighted to say that, just within the last month or two, there have been some excellent new papers on the arXiv that tackle exactly this question, with both positive and negative results.

A second theoretical challenge is, how do we verify the results of a quantum supremacy experiment?  Note that, as far as we know today, verification could itself require classical exponential time.  But that’s not the showstopper that some people think, since we could target the “sweet spot” of 40-50 qubits, where classical verification is difficult (and in particular, clearly “costlier” than running the experiment itself), but also far from impossible with cluster computing resources.

If I have any policy advice, it’s this: recognize that a clear demonstration of quantum supremacy is at least as big a deal as (say) the discovery of the Higgs boson.  After this scientific milestone is achieved, I predict that the whole discussion of commercial applications of quantum computing will shift to a new plane, much like the Manhattan Project shifted to a new plane after Fermi built his pile under the Chicago stadium in 1942.  In other words: at this point, the most “applied” thing to do might be to set applications aside temporarily, and just achieve this quantum supremacy milestone—i.e., build the quantum computing Fermi pile—and thereby show the world that quantum computing speedups are a reality.  Thank you.

Avi Wigderson’s “Permanent” Impact on Me

Wednesday, October 12th, 2016

The following is the lightly-edited transcript of a talk that I gave a week ago, on Wednesday October 5, at Avi Wigderson’s 60th birthday conference at the Institute for Advanced Study in Princeton.  Videos of all the talks (including mine) are now available here.

Thanks so much to Sanjeev Arora, Boaz Barak, Ran Raz, Peter Sarnak, and Amir Shpilka for organizing the conference and for inviting me to speak; to all the other participants and speakers for a great conference; and of course to Avi himself for being Avi. –SA

I apologize that I wasn’t able to prepare slides for today’s talk. But the good news is that, ever since I moved to Texas two months ago, I now carry concealed chalk everywhere I go. [Pull chalk out of pocket]

My history with Avi goes back literally half my life. I spent a semester with him at Hebrew University, and then a year with him as a postdoc here at IAS. Avi has played a bigger role in my career than just about anyone—he helped me professionally, he helped me intellectually, and once I dated and then married an Israeli theoretical computer scientist (who was also a postdoc in Avi’s group), Avi even helped me learn Hebrew. Just today, Avi taught me the Hebrew word for the permanent of a matrix. It turns out that it’s [throaty noises] pehhrmahnent.

But it all started with a talk that Avi gave in Princeton in 2000, which I attended as a prospective graduate student. That talk was about the following function of an n×n real matrix A, the permanent:

$$ \text{Per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^n a_{i,\sigma(i)}. $$

Avi contrasted that function with a superficially similar function, the determinant:

$$ \text{Det}(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma(i)}. $$

In this talk, I want to share a few of the amazing things Avi said about these two functions, and how the things he said then reverberated through my entire career.

Firstly, like we all learn in kindergarten or whatever, the determinant is computable in polynomial time, for example by using Gaussian elimination. By contrast, Valiant proved in 1979 that computing the permanent is #P-complete—which means, at least as hard as any combinatorial counting problem, a class believed to be even harder than NP-complete.

So, despite differing from each other only by some innocent-looking -1 factors, which the determinant has but the permanent lacks, these two functions effectively engage different regions of mathematics. The determinant is linear-algebraic and geometric; it’s the product of the eigenvalues and the volume of the parallelepiped defined by the row vectors. But the permanent is much more stubbornly combinatorial. It’s not quite as pervasive in mathematics as the determinant is, though it does show up. As an example, if you have a bipartite graph G, then the permanent of G’s adjacency matrix counts the number of perfect matchings in G.
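
As a tiny sanity check (my own illustration, just to make the definitions concrete), the following Python evaluates both sums over S_n directly and confirms the perfect-matching claim on a 3×3 example:

```python
from itertools import permutations
import numpy as np

def permanent(A):
    n = len(A)
    return sum(np.prod([A[i, sigma[i]] for i in range(n)])
               for sigma in permutations(range(n)))

def determinant_by_sum(A):
    n = len(A)
    total = 0.0
    for sigma in permutations(range(n)):
        # sign of sigma = (-1)^(number of inversions)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        total += (-1) ** inversions * np.prod([A[i, sigma[i]] for i in range(n)])
    return total

# Bipartite graph on {1,2,3} x {1,2,3}: left vertex i can be matched to right vertex j iff A[i,j]=1
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
print(permanent(A))                               # 2.0: exactly two perfect matchings
print(determinant_by_sum(A), np.linalg.det(A))    # the two determinant computations agree
```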

When n=2, computing the permanent doesn’t look too different from computing the determinant: indeed, we have

$$ \text{Per}\left(\begin{matrix}
a & b\\
c & d
\end{matrix}\right) =\text{Det}\left(\begin{matrix}
a & -b\\
c & d
\end{matrix}\right). $$

But as n gets larger, the fact that the permanent is #P-complete means that it must get exponentially harder to compute than the determinant, unless basic complexity classes collapse. And indeed, to this day, the fastest known algorithm to compute an n×n permanent, Ryser’s algorithm, takes O(n·2^n) time, which is only modestly better than the brute-force algorithm that just sums all n! terms.
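
For concreteness, here is a short Python sketch of Ryser’s formula. The naive subset enumeration below forgoes the Gray-code ordering that gives the true O(n·2^n) running time, so treat it purely as an illustration of the identity Per(A) = (-1)^n Σ_S (-1)^{|S|} Π_i Σ_{j∈S} a_ij.

```python
import numpy as np
from itertools import combinations

def ryser_permanent(A):
    """Ryser's inclusion-exclusion formula for the permanent (illustrative version)."""
    A = np.asarray(A)
    n = A.shape[0]
    total = 0.0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            row_sums = A[:, cols].sum(axis=1)     # sum of each row over the chosen columns
            total += (-1) ** size * np.prod(row_sums)
    return (-1) ** n * total

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(ryser_permanent(A))    # 1*4 + 2*3 = 10
```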

Yet interestingly, not everything about the permanent is hard. So for example, if A is nonnegative, then in 2001, Jerrum, Sinclair, and Vigoda famously gave a polynomial-time randomized algorithm to approximate Per(A) to within a 1+ε multiplicative factor, for ε>0 as small as you like. As an even simpler example, if A is nonnegative and you just want to know whether its permanent is zero or nonzero, that’s equivalent to deciding whether a bipartite graph has at least one perfect matching. And we all know that that can be done in polynomial time.

OK, but the usual algorithm from the textbooks that puts the matching problem in the class P is already a slightly nontrivial one—maybe first grade rather than kindergarten! It involves repeatedly using depth-first search to construct augmenting paths, then modifying the graph, etc. etc.

Sixteen years ago in Princeton, the first thing Avi said that blew my mind is that there’s a vastly simpler polynomial-time algorithm to decide whether a bipartite graph has a perfect matching—or equivalently, to decide whether a nonnegative matrix has a zero or nonzero permanent. This algorithm is not quite as efficient as the textbook one, but it makes up for it by being more magical.

So here’s what you do: you start with the 0/1 adjacency matrix of your graph. I’ll do a 2×2 example, since that’s all I’ll be able to compute on the fly:

$$ \left(\begin{matrix}
1 & 1\\
0 & 1
\end{matrix}\right). $$

Then you normalize each row so it sums to 1. In the above example, this would give

$$ \left(\begin{matrix}
\frac{1}{2} & \frac{1}{2} \\
0 & 1
\end{matrix}\right). $$

Next you normalize each column so it sums to 1:

$$ \left(\begin{matrix}
1 & \frac{1}{3} \\
0 & \frac{2}{3}
\end{matrix}\right). $$

OK, but now the problem is that the rows are no longer normalized, so you normalize them again:

$$ \left(\begin{matrix}
\frac{3}{4} & \frac{1}{4} \\
0 & 1
\end{matrix}\right). $$

Then you normalize the columns again, and so on. You repeat something like n^3 log(n) times. If, after that time, all the row sums and column sums have become within ±1/n of 1, then you conclude that the permanent was nonzero and the graph had a perfect matching. Otherwise, the permanent was zero and the graph had no perfect matching.
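
Here is roughly what that procedure looks like in code (a rough sketch; the iteration count just follows the ~n^3 log n rule of thumb above, and I have made no attempt to tune the tolerance):

```python
import numpy as np

def has_perfect_matching(adj, tol=None):
    """Sinkhorn-style scaling test: does this 0/1 matrix have a nonzero permanent?"""
    M = np.array(adj, dtype=float)
    n = M.shape[0]
    tol = tol if tol is not None else 1.0 / n
    iterations = int(np.ceil(n ** 3 * np.log(n + 1))) + 1
    for _ in range(iterations):
        row_sums = M.sum(axis=1, keepdims=True)
        if np.any(row_sums == 0):
            return False                  # an all-zero row rules out any matching
        M = M / row_sums                  # normalize each row to sum to 1
        col_sums = M.sum(axis=0, keepdims=True)
        if np.any(col_sums == 0):
            return False
        M = M / col_sums                  # normalize each column to sum to 1
    row_ok = np.all(np.abs(M.sum(axis=1) - 1) <= tol)
    col_ok = np.all(np.abs(M.sum(axis=0) - 1) <= tol)
    return row_ok and col_ok

print(has_perfect_matching([[1, 1],
                            [0, 1]]))     # True: the permanent is nonzero
print(has_perfect_matching([[1, 0, 0],
                            [1, 0, 0],
                            [1, 1, 1]]))  # False: the first two rows compete for one column
```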

What gives? Well, it’s a nice exercise to prove why this works. I’ll just give you a sketch: first, when you multiply any row or column of a matrix by a scalar, you multiply the permanent by that same scalar. By using that fact, together with the arithmetic-geometric mean inequality, it’s possible to prove that, in every iteration but the first, when you rebalance all the rows or all the columns to sum to 1, you can’t be decreasing the permanent. The permanent increases monotonically.

Second, if the permanent is nonzero, then after the first iteration it’s at least 1/n^n, simply because we started with a matrix of 0’s and 1’s.

Third, the permanent is always at most the product of the row sums or the product of the column sums, which after the first iteration is 1.

Fourth, at any iteration where there’s some row sum or column sum that’s far from 1, rescaling must not only increase the permanent, but increase it by an appreciable amount—like, a 1+1/n^2 factor or so.

Putting these four observations together, we find that if the permanent is nonzero, then our scaling procedure must terminate after a polynomial number of steps, with every row sum and every column sum close to 1—since otherwise, the permanent would just keep on increasing past its upper bound of 1.

But a converse statement is also true. Suppose the matrix can be scaled so that every row sum and every column sum gets within ±1/n of 1. Then the rescaled entries define a flow through the bipartite graph, with roughly the same amount of flow through each of the 2n vertices. But if such a flow exists, then Hall’s Theorem tells you that there must be a perfect matching (and hence the permanent must be nonzero)—since if a matching didn’t exist, then there would necessarily be a bottleneck preventing the flow.

Together with Nati Linial and Alex Samorodnitsky, Avi generalized this to show that scaling the rows and columns gives you a polynomial-time algorithm to approximate the permanent of a nonnegative matrix. This basically follows from the so-called Egorychev-Falikman Theorem, which says that the permanent of a doubly stochastic matrix is at least n!/n^n. The approximation ratio that you get this way, ~e^n, isn’t nearly as good as Jerrum-Sinclair-Vigoda’s, but the advantage is that the algorithm is deterministic (and also ridiculously simple).
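
Here is a sketch of how that approximation can be organized in code (my own illustration, assuming A is nonnegative with nonzero permanent, and using a fixed number of scaling iterations rather than a carefully analyzed one). If B = D1·A·D2 is approximately doubly stochastic, then Per(A) = Per(B)/(prod(D1)·prod(D2)) with n!/n^n ≤ Per(B) ≤ 1, which pins Per(A) down to within roughly an e^n factor.

```python
import numpy as np
from math import factorial

def scaled_permanent_bounds(A, iterations=2000):
    """Bracket Per(A) for a nonnegative A with nonzero permanent, via Sinkhorn scaling."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    scale = 1.0                              # accumulates prod(D1) * prod(D2)
    M = A.copy()
    for _ in range(iterations):
        r = M.sum(axis=1, keepdims=True)
        M, scale = M / r, scale / np.prod(r)
        c = M.sum(axis=0, keepdims=True)
        M, scale = M / c, scale / np.prod(c)
    # Now Per(A) = Per(M) / scale, with n!/n^n <= Per(M) <= 1 for doubly stochastic M
    return (factorial(n) / n ** n) / scale, 1.0 / scale

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])                   # exact permanent: 10
low, high = scaled_permanent_bounds(A)
print(low, high)                             # the true value 10 lies in [low, high]
```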

For myself, though, I just filed away this idea of matrix scaling for whenever I might need it. It didn’t take long. A year after Avi’s lecture, when I was a beginning grad student at Berkeley, I was obsessing about the foundations of quantum mechanics. Specifically, I was obsessing about the fact that, when you measure a quantum state, the rules of quantum mechanics tell you how to calculate the probability that you’ll see a particular outcome. But the rules are silent about so-called multiple-time or transition probabilities. In other words: suppose we adopt Everett’s Many-Worlds view, according to which quantum mechanics needs to be applied consistently to every system, regardless of scale, so in particular, the state of the entire universe (including us) is a quantum superposition state. We perceive just one branch, but there are also these other branches where we made different choices or where different things happened to us, etc.

OK, fine: for me, that’s not the troubling part! The troubling part is that quantum mechanics rejects as meaningless questions like the following: given that you’re in this branch of the superposition at time t_1, what’s the probability that you’ll be in that branch at time t_2, after some unitary transformation is applied? Orthodox quantum mechanics would say: well, either someone measured you at time t_1, in which case their act of measuring collapsed the superposition and created a whole new situation. Or else no one measured at t_1—but in that case, your state at time t_1 was the superposition state, full stop. It’s sheer metaphysics to imagine a “real you” that jumps around from one branch of the superposition to another, having a sequence of definite experiences.

Granted, in practice, branches of the universe’s superposition that split from each other tend never to rejoin, for the same thermodynamic reasons why eggs tend never to unscramble themselves. And as long as the history of the Everettian multiverse has the structure of a tree, we can sensibly define transition probabilities. But if, with some technology of the remote future, we were able to do quantum interference experiments on human brains (or other conscious entities), the rules of quantum mechanics would no longer predict what those beings should see—not even probabilistically.

I was interested in the question: suppose we just wanted to postulate transition probabilities, with the transitions taking place in some fixed orthogonal basis. What would be a mathematically reasonable way to do that? And it occurred to me that one thing you could do is the following. Suppose for simplicity that you have a pure quantum state, which is just a unit vector of n complex numbers called amplitudes:

$$ \left(\begin{matrix}
\alpha_1\\
\vdots\\
\alpha_n
\end{matrix}\right) $$

Then the first rule of quantum mechanics says that you can apply any unitary transformation U (that is, any norm-preserving linear transformation) to map this state to a new one:

$$ \left(\begin{matrix}
\beta_1\\
\vdots\\
\beta_n
\end{matrix}\right) =\left(\begin{matrix}
u_{11} & \cdots & u_{1n}\\
\vdots & \ddots & \vdots\\
u_{n1} & \cdots & u_{nn}
\end{matrix}\right) \left(\begin{matrix}
\alpha_1\\
\vdots\\
\alpha_n
\end{matrix}\right). $$

The second rule of quantum mechanics, the famous Born Rule, says that if you measure in the standard basis before applying U, then the probability that you’ll find yourself in state i equals |α_i|^2. Likewise, if you measure in the standard basis after applying U, the probability that you’ll find yourself in state j equals |β_j|^2.

OK, but what’s the probability that you’re in state i at the initial time, and then state j at the final time? These joint probabilities, call them p_ij, had better add up to |α_i|^2 and |β_j|^2, if we sum the rows and columns respectively. And ideally, they should be “derived” in some way from the unitary U—so that for example, if u_ij=0 then p_ij=0 as well.

So here’s something you could do: start by replacing each u_ij by its absolute value, to get a nonnegative matrix. Then, normalize the i-th row so that it sums to |α_i|^2, for each i. Then normalize the j-th column so that it sums to |β_j|^2, for each j. Then normalize the rows, then the columns, and keep iterating until hopefully you end up with all the rows and columns having the right sums.
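
Here is that recipe as a short NumPy sketch; nothing about it is canonical, and the example unitary and state are just ones I picked for illustration.

```python
import numpy as np

def transition_probabilities(U, alpha, iterations=500):
    """Iteratively rescale |U| so its rows sum to |alpha_i|^2 and its columns to |beta_j|^2."""
    alpha = np.asarray(alpha, dtype=complex)
    beta = U @ alpha                          # final-time amplitudes
    row_targets = np.abs(alpha) ** 2          # desired row sums
    col_targets = np.abs(beta) ** 2           # desired column sums
    P = np.abs(U).astype(float)               # start from the entrywise absolute values
    for _ in range(iterations):
        r = P.sum(axis=1)
        P = P * (row_targets / np.maximum(r, 1e-300))[:, None]   # rescale rows
        c = P.sum(axis=0)
        P = P * (col_targets / np.maximum(c, 1e-300))[None, :]   # rescale columns
    return P

# Example: a Hadamard unitary acting on an equal superposition of the two basis states
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
alpha = np.array([1, 1], dtype=complex) / np.sqrt(2)
P = transition_probabilities(U, alpha)
print(P)
print(P.sum(axis=1), np.abs(alpha) ** 2)      # row sums match |alpha_i|^2
print(P.sum(axis=0), np.abs(U @ alpha) ** 2)  # column sums match |beta_j|^2
```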

So the first question I faced was, does this process converge? And I remembered what Avi taught me about the permanent. In this case, because of the nonuniform row and column scalings, the permanent no longer works as a progress measure, but there’s something else that does work. Namely, as a first step, we can use the Max-Flow/Min-Cut Theorem to show that there exists a nonnegative matrix F=(f_ij) such that f_ij=0 whenever u_ij=0, and also

$$ \sum_j f_{ij} = \left|\alpha_i\right|^2 \forall i,\ \ \ \ \ \sum_i f_{ij} = \left|\beta_j\right|^2 \forall j. $$

Then, letting M=(m_ij) be our current rescaled matrix (so that initially m_ij:=|u_ij|), we use

$$ \prod_{i,j : u_{ij}\ne 0} m_{ij}^{f_{ij}} $$

as our progress measure. By using the nonnegativity of the Kullback-Leibler divergence, one can prove that this quantity never decreases. So then, just like with 0/1 matrices and the permanent, we get eventual convergence, and indeed convergence after a number of iterations that’s polynomial in n.

I was pretty stoked about this until I went to the library, and discovered that Erwin Schrödinger had proposed the same matrix scaling process in 1931! And Masao Nagasawa and others then rigorously analyzed it. OK, but their motivations were somewhat different, and for some reason they never talked about finite-dimensional matrices, only infinite-dimensional ones.

I can’t resist telling you my favorite open problem about this matrix scaling process: namely, is it stable under small perturbations? In other words, if I change one of the α_i’s or u_ij’s by some small ε, then do the final p_ij’s also change by at most some small δ? To clarify, several people have shown me how to prove that the mapping to the p_ij’s is continuous. But for computer science applications, one needs something stronger: namely that when the matrix M, and the row and column scalings, actually arise from a unitary matrix in the way above, we get strong uniform continuity, with a 1/n^O(1) change to the inputs producing only a 1/n^O(1) change to the outputs (and hopefully even better than that).

The more general idea that I was groping toward or reinventing here is called a hidden-variable theory, of which the most famous example is Bohmian mechanics. Again, though, Bohmian mechanics has the defect that it’s only formulated for some exotic state space that the physicists care about for some reason—a space involving pointlike objects called “particles” that move around in 3 Euclidean dimensions (why 3? why not 17?).

Anyway, this whole thing led me to wonder: under the Schrödinger scaling process, or something like it, what’s the computational complexity of sampling an entire history of the hidden variable through a quantum computation? (“If, at the moment of your death, your whole life history flashes before you in an instant, what can you then efficiently compute?”)

Clearly the complexity is at least BQP (i.e., quantum polynomial time), because even sampling where the hidden variable is at a single time is equivalent to sampling the output distribution of a quantum computer. But could the complexity be even more than BQP, because of the correlations between the hidden variable values at different times? I noticed that, indeed, sampling a hidden variable history would let you do some crazy-seeming things, like solve the Graph Isomorphism problem in polynomial time (OK, fine, that seemed more impressive at the time than it does after Babai’s breakthrough), or find collisions in arbitrary cryptographic hash functions, or more generally, solve any problem in the complexity class SZK (Statistical Zero Knowledge).

But you might ask: what evidence do we have that any of these problems are hard even for garden-variety quantum computers? As many of you know, it’s widely conjectured today that NP⊄BQP—i.e., that quantum computers can’t solve NP-complete problems in polynomial time. And in the “black box” setting, where all you know how to do is query candidate solutions to your NP-complete problem to check whether they’re valid, it’s been proven that quantum computers don’t give you an exponential speedup: the best they can give is the square-root speedup of Grover’s algorithm.

But for these SZK problems, like finding collisions in hash functions, who the hell knows? So, this is the line of thought that led me to probably the most important thing I did in grad school, the so-called quantum lower bound for collision-finding. That result says that, if (again) your hash function is only accessible as a black box, then a quantum computer can provide at most a polynomial speedup over a classical computer for finding collisions in it (in this case, it turns out, at most a two-thirds power speedup). There are several reasons you might care about that, such as showing that one of the basic building blocks of modern cryptography could still be secure in a world with quantum computers, or proving an oracle separation between SZK and BQP. But my original motivation was just to understand how transition probabilities would change quantum computation.

The permanent has also shown up in a much more direct way in my work on quantum computation. If we go back to Avi’s lecture from 2000, a second thing he said that blew my mind was that apparently, or so he had heard, even the fundamental particles of the universe know something about the determinant and the permanent. In particular, he said, fermions—the matter particles, like the quarks and electrons in this stage—have transition amplitudes that are determinants of matrices. Meanwhile, bosons—the force-carrying particles, like the photons coming from the ceiling that let you see this talk—have transition amplitudes that are permanents of matrices.

Or as Steven Weinberg, one of the great physicists on earth, memorably put it in the first edition of his recent quantum mechanics textbook: “in the case of bosons, it is also a determinant, except without minus signs.” I’ve had the pleasure of getting to know Weinberg at Austin, so recently I asked him about that line. He told me that of course he knew that the determinant without minus signs is called a permanent, but he thought no one else would know! As far as he knew, the permanent was just some esoteric function used by a few quantum field theorists who needed to calculate boson amplitudes.

Briefly, the reason why the permanent and determinant turn up here is the following: whenever you have n particles that are identical, to calculate the amplitude for them to do something, you need to sum over all n! possible permutations of the particles. Furthermore, each contribution to the sum is a product of n complex numbers, one u_ij for each particle that hops from i to j. But there’s a difference: when you swap two identical bosons, nothing happens, and that’s why bosons give rise to the permanent (of an n×n complex matrix, if there are n bosons). By contrast, when you swap two identical fermions, the amplitude for that state of the universe gets multiplied by -1, and that’s why fermions give rise to the determinant.
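
Here is that rule as a small Python sketch (a toy of my own; I take all input and output modes to be distinct, which sidesteps the extra √(s_i!) normalization factors that bosons need when modes are repeated):

```python
import numpy as np
from itertools import permutations

def permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

def transition_amplitude(U, in_modes, out_modes, statistics):
    """Amplitude for n identical particles to hop from in_modes to out_modes under U."""
    sub = U[np.ix_(in_modes, out_modes)]      # the n x n submatrix picked out by the modes
    return permanent(sub) if statistics == "boson" else np.linalg.det(sub)

# A random 4-mode single-particle unitary (QR of a complex Gaussian matrix)
rng = np.random.default_rng(1)
z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(z)

print(transition_amplitude(U, [0, 1], [2, 3], "boson"))    # permanent of a 2x2 block
print(transition_amplitude(U, [0, 1], [2, 3], "fermion"))  # determinant of the same block
```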

Anyway, Avi ended his talk with a quip about how unfair it seemed to the bosons that they should have to work so much harder than the fermions just to calculate where they should be!

And then that one joke of Avi—that way of looking at things—rattled around in my head for a decade, like a song I couldn’t get rid of. It raised the question: wait a minute, bosons—particles that occur in Nature—are governed by a #P-complete function? Does that mean we could actually use bosons to solve #P-complete problems in polynomial time? That seems ridiculous, like the kind of nonsense I’m fighting every few weeks on my blog! As I said before, most of us don’t even expect quantum computers to be able to solve NP-complete problems in polynomial time, let alone #P-complete ones.

As it happens, Troyansky and Tishby had already taken up that puzzle in 1996. (Indeed Avi, being the social butterfly and hub node of our field that he is, had learned about the role of permanents and determinants in quantum mechanics from them.) What Troyansky and Tishby said was, it’s true that if you have a system of n identical, non-interacting bosons, their transition amplitudes are given by permanents of n×n matrices. OK, but amplitudes in quantum mechanics are not directly observable. They’re just what you use to calculate the probability that you’ll see this or that measurement outcome. But if you try to encode a hard instance of a #P-complete problem into a bosonic system, the relevant amplitudes will in general be exponentially small. And that means that, if you want a decent estimate of the permanent, you’ll need to repeat the experiment an exponential number of times. So OK, they said, nice try, but this doesn’t give you a computational advantage after all in calculating the permanent compared to classical brute force.

In our 2011 work on BosonSampling, my student Alex Arkhipov and I reopened the question. We said, not so fast. It’s true that bosons don’t seem to help you in estimating the permanent of a specific matrix of your choice. But what if your goal was just to sample a random n×n matrix A∈C^{n×n}, in a way that’s somehow biased toward matrices with larger permanents? Now, why would that be your goal? I have no idea! But this sampling is something that a bosonic system would easily let you do.
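
Concretely, with n photons entering the first n modes of an m-mode interferometer U, the output pattern s = (s_1,…,s_m) occurs with probability |Per(A_s)|^2/(s_1!⋯s_m!), where A_s takes the m×n submatrix formed by U’s first n columns and repeats its j-th row s_j times. Here is a brute-force sketch for tiny sizes, which is of course exactly the exponential-time classical computation that we conjecture becomes infeasible at scale:

```python
import numpy as np
from itertools import permutations, combinations_with_replacement
from math import factorial

def permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

def boson_sampling_distribution(U, n_photons):
    """Output distribution for n photons entering modes 0..n-1 of the interferometer U."""
    m = U.shape[0]
    A = U[:, :n_photons]                      # columns for the occupied input modes
    dist = {}
    for outs in combinations_with_replacement(range(m), n_photons):
        counts = tuple(outs.count(j) for j in range(m))
        A_S = A[list(outs), :]                # row j repeated counts[j] times
        dist[counts] = abs(permanent(A_S)) ** 2 / np.prod([factorial(c) for c in counts])
    return dist

rng = np.random.default_rng(2)
z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(z)                        # a random 3-mode interferometer
dist = boson_sampling_distribution(U, 2)      # 2 photons in 3 modes
print(sum(dist.values()))                     # the probabilities sum to 1
for pattern, p in sorted(dist.items()):
    print(pattern, round(float(p), 4))
```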

So, what Arkhipov and I proved was that this gives rise to a class of probability distributions that can be sampled in quantum polynomial time (indeed, by a very rudimentary type of quantum computer), but that can’t be sampled in classical polynomial time unless the polynomial hierarchy collapses to the third level. And even though you’re not solving a #P-complete problem, the #P-completeness of the permanent still plays a crucial role in explaining why the sampling problem is hard. (Basically, one proves that the probabilities are #P-hard even to approximate, but that if there were a fast classical sampling algorithm, then the probabilities could be approximated in the class BPP^NP. So if a fast classical sampling algorithm existed, then P^#P would equal BPP^NP, which would collapse the polynomial hierarchy by Toda’s Theorem.)

When we started on this, Arkhipov and I thought about it as just pure complexity theory—conceptually clarifying what role the #P-completeness of the permanent plays in physics. But then at some point it occurred to us: bosons (such as photons) actually exist, and experimentalists in quantum optics like to play with them, so maybe they could demonstrate some of this stuff in the lab. And as it turned out, the quantum optics people were looking for something to do at the time, and they ate it up.

Over the past five years, a trend has arisen in experimental physics that goes by the name “Quantum Supremacy,” although some people are now backing away from the name because of Trump. The idea is: without yet having a universal quantum computer, can we use the hardware that we’re able to build today to demonstrate the reality of a quantum-computational speedup as clearly as possible? Not necessarily for a useful problem, but just for some problem? Of course, no experiment can prove that something is scaling polynomially rather than exponentially, since that’s an asymptotic statement. But an experiment could certainly raise the stakes for the people who deny such a statement—for example, by solving something a trillion times faster than we know how to solve it otherwise, using methods for which we don’t know a reason for them not to scale.

I like to say that for me, the #1 application of quantum computing, more than breaking RSA or even simulating physics and chemistry, is simply disproving the people who say that quantum computing is impossible! So, quantum supremacy targets that application.

Experimental BosonSampling has become a major part of the race to demonstrate quantum supremacy. By now, at least a half-dozen groups around the world have reported small-scale implementations—the record, so far, being an experiment at Bristol that used 6 photons, and experimentally confirmed that, yes, their transition amplitudes are given by permanents of 6×6 complex matrices. The challenge now is to build single-photon sources that are good enough that you could scale up to (let’s say) 30 photons, which is where you’d really start seeing a quantum advantage over the best known classical algorithms. And again, this whole quest really started with Avi’s joke.

A year after my and Arkhipov’s work, I noticed that one also can run the connection between quantum optics and the permanent in the “reverse” direction. In other words: with BosonSampling, we used the famous theorem of Valiant, that the permanent is #P-complete, to help us argue that bosons can solve hard sampling problems. But if we know by some other means that quantum optics lets us encode #P-complete problems, then we can use that to give an independent, “quantum” proof that the permanent is #P-complete in the first place! As it happens, there is another way to see why quantum optics lets us encode #P-complete problems. Namely, we can use celebrated work by Knill, Laflamme, and Milburn (KLM) from 2001, which showed how to perform universal quantum computation using quantum optics with the one additional resource of “feed-forward measurements.” With minor modifications, the construction by KLM also lets us encode a #P-complete problem into a bosonic amplitude, which we know is a permanent—thereby proving that the permanent is #P-complete, in what I personally regard as a much more intuitive way than Valiant’s original approach based on cycle covers. This illustrates a theme that we’ve seen over and over in the last 13 years or so, which is the use of quantum methods and arguments to gain insight even about classical computation.

Admittedly, I wasn’t proving anything here in classical complexity theory that wasn’t already known, just giving a different proof for an old result! Extremely recently, however, my students Daniel Grier and Luke Schaeffer have extended my argument based on quantum optics, to show that computing the permanent of a unitary or orthogonal matrix is #P-complete. (Indeed, even over finite fields of characteristic k, computing the permanent of an orthogonal matrix is a Mod_kP-complete problem, as long as k is not 2 or 3—which turns out to be the tight answer.) This is not a result that we previously knew by any means, whether quantum or classical.

I can’t resist telling you the biggest theoretical open problem that arose from my and Arkhipov’s work. We would like to say: even if you had a polynomial-time algorithm that sampled a probability distribution that was merely close, in variation distance, to the BosonSampling distribution, that would already imply a collapse of the polynomial hierarchy. But we’re only able to prove that assuming a certain problem is #P-complete, which no one has been able to prove #P-complete. That problem is the following:

Given an n×n matrix A, each of whose entries is an i.i.d. complex Gaussian with mean 0 and variance 1 (that is, drawn from N(0,1)_C), estimate |Per(A)|^2, to within additive error ±ε·n!, with probability at least 1-δ over the choice of A. Do this in time polynomial in n, 1/ε, and 1/δ.

Note that, if you care about exactly computing the permanent of a Gaussian random matrix, or about approximating the permanent of an arbitrary matrix, we know how to prove both of those problems #P-complete. The difficulty “only” arises when we combine approximation and average-case in the same problem.

At the moment, we don’t even know something more basic, which is: what’s the distribution over |Per(A)|^2, when A is an n×n matrix of i.i.d. N(0,1)_C Gaussians? Based on numerical evidence, we conjecture that the distribution converges to lognormal as n gets large. By using the interpretation of the determinant as the volume of a parallelepiped, we can prove that the distribution over |Det(A)|^2 converges to lognormal. And the distribution over |Per(A)|^2 looks almost the same when you plot it. But not surprisingly, the permanent is harder to analyze.
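
Here is the sort of quick-and-dirty numerical experiment I have in mind (a sketch only: the permanents are computed by brute force, so n has to stay tiny, and of course nothing here bears on the n→∞ limit):

```python
import numpy as np
from itertools import permutations

def permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

rng = np.random.default_rng(3)
n, trials = 6, 200
log_per_sq, log_det_sq = [], []
for _ in range(trials):
    # i.i.d. N(0,1)_C entries: real and imaginary parts each have variance 1/2
    A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    log_per_sq.append(np.log(abs(permanent(A)) ** 2))
    log_det_sq.append(np.log(abs(np.linalg.det(A)) ** 2))

print("log|Per|^2: mean %.2f, std %.2f" % (np.mean(log_per_sq), np.std(log_per_sq)))
print("log|Det|^2: mean %.2f, std %.2f" % (np.mean(log_det_sq), np.std(log_det_sq)))
```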

This brings me to my final vignette. Why would anyone even suspect that approximating the permanent of a Gaussian random matrix would be a #P-hard problem? Well, because if you look at the permanent of an n×n matrix over a large enough finite field, say F_p, that function famously has the property of random self-reducibility. This means: the ability to calculate such a permanent in polynomial time, on 90% of all matrices in F_p^{n×n}, or even for that matter on only 1% of them, implies the ability to calculate it in polynomial time on every such matrix.

The reason for this is simply that the permanent is a low-degree polynomial, and low-degree polynomials have extremely useful error-correcting properties. In particular, if you can compute such a polynomial on any large fraction of points, then you can do noisy polynomial interpolation (e.g., the Berlekamp-Welch algorithm, or list decoding), in order to get the value of the polynomial on an arbitrary point.

I don’t specifically remember Avi talking about the random self-reducibility of the permanent in his 2000 lecture, but he obviously would have talked about it! And it was really knowing about the random self-reducibility of the permanent, and how powerful it was, that led me and Alex Arkhipov to the study of BosonSampling in the first place.

In complexity theory, the random self-reducibility of the permanent is hugely important because it was sort of the spark for some of our most convincing examples of non-relativizing results—that is, results that fail relative to a suitable oracle. The most famous such result is that #P, and for that matter even PSPACE, admit interactive protocols (the IP=PSPACE theorem). In the 1970s, Baker, Gill, and Solovay pointed out that non-relativizing methods would be needed to resolve P vs. NP and many of the other great problems of the field.

In 2007, Avi and I wrote our only joint paper so far. In that paper, we decided to take a closer look at the non-relativizing results based on interactive proofs. We said: while it’s true that these results don’t relativize—that is, there are oracles relative to which they fail—nevertheless, these results hold relative to all oracles that themselves encode low-degree polynomials over finite fields (such as the permanent). So, introducing a term, Avi and I said that results like IP=PSPACE algebrize.

But then we also showed that, if you want to prove P≠NP—or for that matter, even prove circuit lower bounds that go “slightly” beyond what’s already known (such as NEXP⊄P/poly)—you’ll need techniques that are not only non-relativizing, but also non-algebrizing. So in some sense, the properties of the permanent that are used (for example) in proving that it has an interactive protocol, just “aren’t prying the black box open wide enough.”

I have a more recent result, from 2011 or so, that I never got around to finishing a paper about. In this newer work, I decided to take another look at the question: what is it about the permanent that actually fails to relativize? And I prove the following result: relative to an arbitrary oracle A, the class #P has complete problems that are both random self-reducible and downward self-reducible (that is, reducible to smaller instances of the same problem). So, contrary to what certainly I and maybe others had often thought, it’s not the random self-reducibility of the permanent that’s the crucial thing about it. What’s important, instead, is a further property that the permanent has, of being self-checkable and self-correctible.

In other words: given (say) a noisy circuit for the permanent, it’s not just that you can correct that circuit to compute whichever low-degree polynomial it was close to computing. Rather, it’s that you can confirm that the polynomial is in fact the permanent, and nothing else.

I like the way Ketan Mulmuley thinks about this phenomenon in his Geometric Complexity Theory, which is a speculative, audacious program to try to prove that the permanent is harder than the determinant, and to tackle the other great separation questions of complexity theory (including P vs. NP), by using algebraic geometry and representation theory. Mulmuley says: the permanent is a polynomial in the entries of an n×n matrix that not only satisfies certain symmetries (e.g., under interchanging rows or columns), but is uniquely characterized by those symmetries. In other words, if you find a polynomial that passes certain tests—for example, if it behaves in the right way under rescaling and interchanging rows and columns—then that polynomial must be the permanent, or a scalar multiple of the permanent. Similarly, if you find a polynomial that passes the usual interactive proof for the permanent, that polynomial must be the permanent. I think this goes a long way toward explaining why the permanent is so special: it’s not just any hard-to-compute, low-degree polynomial; it’s one that you can recognize when you come across it.

I’ve now told you about the eventual impact of one single survey talk that Avi gave 16 years ago—not even a particularly major or important one. So you can only imagine what Avi’s impact must have been on all of us, if you integrate over all the talks he’s given and papers he’s written and young people he’s mentored and connections he’s made his entire career. May that impact be permanent.

The No-Cloning Theorem and the Human Condition: My After-Dinner Talk at QCRYPT

Monday, September 19th, 2016

The following are the after-dinner remarks that I delivered at QCRYPT’2016, the premier quantum cryptography conference, on Thursday Sep. 15 in Washington DC.  You can compare them to my after-dinner remarks at QIP’2006 to see how much I’ve “matured” since then. Thanks so much to Yi-Kai Liu and the other organizers for inviting me and for putting on a really fantastic conference.

It’s wonderful to be here at QCRYPT among so many friends—this is the first significant conference I’ve attended since I moved from MIT to Texas. I do, however, need to register a complaint with the organizers, which is: why wasn’t I allowed to bring my concealed firearm to the conference? You know, down in Texas, we don’t look too kindly on you academic elitists in Washington DC telling us what to do, who we can and can’t shoot and so forth. Don’t mess with Texas! As you might’ve heard, many of us Texans even support a big, beautiful, physical wall being built along our border with Mexico. Personally, though, I don’t think the wall proposal goes far enough. Forget about illegal immigration and smuggling: I don’t even want Americans and Mexicans to be able to win the CHSH game with probability exceeding 3/4. Do any of you know what kind of wall could prevent that? Maybe a metaphysical wall.

OK, but that’s not what I wanted to talk about. When Yi-Kai asked me to give an after-dinner talk, I wasn’t sure whether to try to say something actually relevant to quantum cryptography or just make jokes. So I’ll do something in between: I’ll tell you about research directions in quantum cryptography that are also jokes.

The subject of this talk is a deep theorem that stands as one of the crowning achievements of our field. I refer, of course, to the No-Cloning Theorem. Almost everything we’re talking about at this conference, from QKD onwards, is based in some way on quantum states being unclonable. If you read Stephen Wiesner’s paper from 1968, which founded quantum cryptography, the No-Cloning Theorem already played a central role—although Wiesner didn’t call it that. By the way, here’s my #1 piece of research advice to the students in the audience: if you want to become immortal, just find some fact that everyone already knows and give it a name!

I’d like to pose the question: why should our universe be governed by physical laws that make the No-Cloning Theorem true? I mean, it’s possible that there’s some other reason for our universe to be quantum-mechanical, and No-Cloning is just a byproduct of that. No-Cloning would then be like the armpit of quantum mechanics: not there because it does anything useful, but just because there’s gotta be something under your arms.

OK, but No-Cloning feels really fundamental. One of my early memories is when I was 5 years old or so, and utterly transfixed by my dad’s home fax machine, one of those crappy 1980s fax machines with wax paper. I kept thinking about it: is it really true that a piece of paper gets transmaterialized, sent through a wire, and reconstituted at the other location? Could I have been that wrong about how the universe works? Until finally I got it—and once you get it, it’s hard even to recapture your original confusion, because it becomes so obvious that the world is made not of stuff but of copyable bits of information. “Information wants to be free!”

The No-Cloning Theorem represents nothing less than a partial return to the view of the world that I had before I was five. It says that quantum information doesn’t want to be free: it wants to be private. There is, it turns out, a kind of information that’s tied to a particular place, or set of places. It can be moved around, or even teleported, but it can’t be copied the way a fax machine copies bits.

So I think it’s worth at least entertaining the possibility that we don’t have No-Cloning because of quantum mechanics; we have quantum mechanics because of No-Cloning—or because quantum mechanics is the simplest, most elegant theory that has unclonability as a core principle. But if so, that just pushes the question back to: why should unclonability be a core principle of physics?

Quantum Key Distribution

A first suggestion about this question came from Gilles Brassard, who’s here. Years ago, I attended a talk by Gilles in which he speculated that the laws of quantum mechanics are what they are because Quantum Key Distribution (QKD) has to be possible, while bit commitment has to be impossible. If true, that would be awesome for the people at this conference. It would mean that, far from being this exotic competitor to RSA and Diffie-Hellman that’s distance-limited and bandwidth-limited and has a tiny market share right now, QKD would be the entire reason why the universe is as it is! Or maybe what this really amounts to is an appeal to the Anthropic Principle. Like, if QKD hadn’t been possible, then we wouldn’t be here at QCRYPT to talk about it.

Quantum Money

But maybe we should search more broadly for the reasons why our laws of physics satisfy a No-Cloning Theorem. Wiesner’s paper sort of hinted at QKD, but the main thing it had was a scheme for unforgeable quantum money. This is one of the most direct uses imaginable for the No-Cloning Theorem: to store economic value in something that it’s physically impossible to copy. So maybe that’s the reason for No-Cloning: because God wanted us to have e-commerce, and didn’t want us to have to bother with blockchains (and certainly not with credit card numbers).

The central difficulty with quantum money is: how do you authenticate a bill as genuine? (OK, fine, there’s also the difficulty of how to keep a bill coherent in your wallet for more than a microsecond or whatever. But we’ll leave that for the engineers.)

In Wiesner’s original scheme, he solved the authentication problem by saying that, whenever you want to verify a quantum bill, you bring it back to the bank that printed it. The bank then looks up the bill’s classical serial number in a giant database, which tells the bank in which basis to measure each of the bill’s qubits.
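
As a toy illustration of that verification step (a classical simulation of my own, with made-up helper names, in which each qubit of the bill is one of |0⟩, |1⟩, |+⟩, |−⟩):

```python
import numpy as np

rng = np.random.default_rng(4)
KET = {('Z', 0): np.array([1.0, 0.0]), ('Z', 1): np.array([0.0, 1.0]),
       ('X', 0): np.array([1.0, 1.0]) / np.sqrt(2), ('X', 1): np.array([1.0, -1.0]) / np.sqrt(2)}

def mint_bill(n_qubits):
    """The bank picks a random basis and bit for each qubit, and keeps them secret."""
    secret = [(rng.choice(['Z', 'X']), int(rng.integers(2))) for _ in range(n_qubits)]
    bill = [KET[(b, v)] for b, v in secret]        # the quantum part of the bill
    return secret, bill                            # bank keeps `secret`, customer keeps `bill`

def measure(qubit, basis):
    """Measure a (simulated) qubit in the Z or X basis; return the outcome bit."""
    outcomes = [KET[(basis, 0)], KET[(basis, 1)]]
    probs = np.array([abs(np.dot(s, qubit)) ** 2 for s in outcomes])
    return int(rng.choice(2, p=probs / probs.sum()))

def bank_verifies(secret, bill):
    return all(measure(q, b) == v for (b, v), q in zip(secret, bill))

secret, bill = mint_bill(20)
print(bank_verifies(secret, bill))                 # True: a genuine bill always passes
forged = [KET[('Z', int(rng.integers(2)))] for _ in secret]
print(bank_verifies(secret, forged))               # almost always False: guessing fails
```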

With this system, you can actually get information-theoretic security against counterfeiting. OK, but the fact that you have to bring a bill to the bank to be verified negates much of the advantage of quantum money in the first place. If you’re going to keep involving a bank, then why not just use a credit card?

That’s why over the past decade, some of us have been working on public-key quantum money: that is, quantum money that anyone can verify. For this kind of quantum money, it’s easy to see that the No-Cloning Theorem is no longer enough: you also need some cryptographic assumption. But OK, we can consider that. In recent years, we’ve achieved glory by proposing a huge variety of public-key quantum money schemes—and we’ve achieved even greater glory by breaking almost all of them!

After a while, there were basically two schemes left standing: one based on knot theory by Ed Farhi, Peter Shor, et al. That one has been proven to be secure under the assumption that it can’t be broken. The second scheme, which Paul Christiano and I proposed in 2012, is based on hidden subspaces encoded by multivariate polynomials. For our scheme, Paul and I were able to do better than Farhi et al.: we gave a security reduction. That is, we proved that our quantum money scheme is secure, unless there’s a polynomial-time quantum algorithm to find hidden subspaces encoded by low-degree multivariate polynomials (yadda yadda, you can look up the details) with much greater success probability than we thought possible.

Today, the situation is that my and Paul’s security proof remains completely valid, but meanwhile, our money is completely insecure! Our reduction means the opposite of what we thought it did. There is a break of our quantum money scheme, and as a consequence, there’s also a quantum algorithm to find large subspaces hidden by low-degree polynomials with much better success probability than we’d thought. What happened was that first, some French algebraic cryptanalysts—Faugere, Pena, I can’t pronounce their names—used Gröbner bases to break the noiseless version of the scheme, in classical polynomial time. So I thought, phew! At least I had acceded when Paul insisted that we also include a noisy version of the scheme. But later, Paul noticed that there’s a quantum reduction from the problem of breaking our noisy scheme to the problem of breaking the noiseless one, so the former is broken as well.

I’m choosing to spin this positively: “we used quantum money to discover a striking new quantum algorithm for finding subspaces hidden by low-degree polynomials. Err, yes, that’s exactly what we did.”

But, bottom line, until we manage to invent a better public-key quantum money scheme, or otherwise sort this out, I don’t think we’re entitled to claim that God put unclonability into our universe in order for quantum money to be possible.

Copy-Protected Quantum Software

So if not money, then what about its cousin, copy-protected software—could that be why No-Cloning holds? By copy-protected quantum software, I just mean a quantum state that, if you feed it into your quantum computer, lets you evaluate some Boolean function on any input of your choice, but that doesn’t let you efficiently prepare more states that let the same function be evaluated. I think this is important as one of the preeminent evil applications of quantum information. Why should nuclear physicists and genetic engineers get a monopoly on the evil stuff?

OK, but is copy-protected quantum software even possible? The first worry you might have is that, yeah, maybe it’s possible, but then every time you wanted to run the quantum program, you’d have to make a measurement that destroyed it. So then you’d have to go back and buy a new copy of the program for the next run, and so on. Of course, to the software company, this would presumably be a feature rather than a bug!

But as it turns out, there’s a fact many of you know—sometimes called the “Gentle Measurement Lemma,” other times the “Almost As Good As New Lemma”—which says that, as long as the outcome of your measurement on a quantum state could be predicted almost with certainty given knowledge of the state, the measurement can be implemented in such a way that it hardly damages the state at all. This tells us that, if quantum money, copy-protected quantum software, and the other things we’re talking about are possible at all, then they can also be made reusable. I summarize the principle as: “if rockets, then space shuttles.”

Much like with quantum money, one can show that, relative to a suitable oracle, it’s possible to quantumly copy-protect any efficiently computable function—or rather, any function that’s hard to learn from its input/output behavior. Indeed, the implementation can be not only copy-protected but also obfuscated, so that the user learns nothing besides the input/output behavior. As Bill Fefferman pointed out in his talk this morning, the No-Cloning Theorem lets us bypass Barak et al.’s famous result on the impossibility of obfuscation, because their impossibility proof assumed the ability to copy the obfuscated program.

Of course, what we really care about is whether quantum copy-protection is possible in the real world, with no oracle. I was able to give candidate implementations of quantum copy-protection for extremely special functions, like one that just checks the validity of a password. In the general case—that is, for arbitrary programs—Paul Christiano has a beautiful proposal for how to do it, which builds on our hidden-subspace money scheme. Unfortunately, since our money scheme is currently in the shop being repaired, it’s probably premature to think about the security of the much more complicated copy-protection scheme! But these are wonderful open problems, and I encourage any of you to come and scoop us. Once we know whether uncopyable quantum software is possible at all, we could then debate whether it’s the “reason” for our universe to have unclonability as a core principle.

Unclonable Proofs and Advice

Along the same lines, I can’t resist mentioning some favorite research directions, which some enterprising student here could totally turn into a talk at next year’s QCRYPT.

Firstly, what can we say about clonable versus unclonable quantum proofs—that is, QMA witness states? In other words: for which problems in QMA can we ensure that there’s an accepting witness that lets you efficiently create as many additional accepting witnesses as you want? (I mean, besides the QCMA problems, the ones that have short classical witnesses?) For which problems in QMA can we ensure that there’s an accepting witness that doesn’t let you efficiently create any additional accepting witnesses? I do have a few observations about these questions—ask me if you’re interested—but on the whole, I believe almost anything one can ask about them remains open.

Admittedly, it’s not clear how much use an unclonable proof would be. Like, imagine a quantum state that encoded a proof of the Riemann Hypothesis, and which you would keep in your bedroom, in a glass orb on your nightstand or something. And whenever you felt your doubts about the Riemann Hypothesis resurfacing, you’d take the state out of its orb and measure it again to reassure yourself of RH’s truth. You’d be like, “my preciousssss!” And no one else could copy your state and thereby gain the same Riemann-faith-restoring powers that you had. I dunno, I probably won’t hawk this application in a DARPA grant.

Similarly, one can ask about clonable versus unclonable quantum advice states—that is, initial states that are given to you to boost your computational power beyond that of an ordinary quantum computer. And that’s also a fascinating open problem.

OK, but maybe none of this quite gets at why our universe has unclonability. And this is an after-dinner talk, so do you want me to get to the really crazy stuff? Yes?

Self-Referential Paradoxes

OK! What if unclonability is our universe’s way around the paradoxes of self-reference, like the unsolvability of the halting problem and Gödel’s Incompleteness Theorem? Allow me to explain what I mean.

In kindergarten or wherever, we all learn Turing’s proof that there’s no computer program to solve the halting problem. But what isn’t usually stressed is that that proof actually does more than advertised. If someone hands you a program that they claim solves the halting problem, Turing doesn’t merely tell you that that person is wrong—rather, he shows you exactly how to expose the person as a jackass, by constructing an example input on which their program fails. All you do is, you take their claimed halt-decider, modify it in some simple way, and then feed the result back to the halt-decider as input. You thereby create a situation where, if your program halts given its own code as input, then it must run forever, and if it runs forever then it halts. “WHOOOOSH!” [head-exploding gesture]
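
In code, the construction looks something like this (a sketch only, where `claimed_halts` is a hypothetical stand-in for somebody’s alleged halt-decider; nothing here is a serious implementation):

```python
def claimed_halts(program_source: str) -> bool:
    """Hypothetical stand-in for somebody's alleged halting-problem solver: given a
    program's source, it claims to predict whether that program halts when run on its
    own source as input.  (This stub just answers "halts" for everything.)"""
    return True

# Turing's move: a program that asks the decider about ITSELF, then does the opposite.
TROUBLEMAKER_SOURCE = """
if claimed_halts(TROUBLEMAKER_SOURCE):
    while True:       # the decider said "halts", so loop forever
        pass
# the decider said "loops", so fall through and halt immediately
"""

print(claimed_halts(TROUBLEMAKER_SOURCE))   # the decider says: "it halts"
# ...but if we actually ran TROUBLEMAKER_SOURCE, it would then loop forever,
# exposing the decider as wrong on this very input.  Any other claimed decider
# fails in the same way on its own troublemaker program.
```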

OK, but now imagine that the program someone hands you, which they claim solves the halting problem, is a quantum program. That is, it’s a quantum state, which you measure in some basis depending on the program you’re interested in, in order to decide whether that program halts. Well, the truth is, this quantum program still can’t work to solve the halting problem. After all, there’s some classical program that simulates the quantum one, albeit less efficiently, and we already know that the classical program can’t work.

But now consider the question: how would you actually produce an example input on which this quantum program failed to solve the halting problem? Like, suppose the program worked on every input you tried. Then ultimately, to produce a counterexample, you might need to follow Turing’s proof and make a copy of the claimed quantum halt-decider. But then, of course, you’d run up against the No-Cloning Theorem!

So we seem to arrive at the conclusion that, while of course there’s no quantum program to solve the halting problem, there might be a quantum program for which no one could explicitly refute that it solved the halting problem, by giving a counterexample.

I was pretty excited about this observation for a day or two, until I noticed the following. Let’s suppose your quantum program that allegedly solves the halting problem has n qubits. Then it’s possible to prove that the program can’t possibly be used to compute more than, say, 2n bits of Chaitin’s constant Ω, which is the probability that a random program halts. OK, but if we had an actual oracle for the halting problem, we could use it to compute as many bits of Ω as we wanted. So, suppose I treated my quantum program as if it were an oracle for the halting problem, and I used it to compute the first 2n bits of Ω. Then I would know that, assuming the truth of quantum mechanics, the program must have made a mistake somewhere. There would still be something weird, which is that I wouldn’t know on which input my program had made an error—I would just know that it must’ve erred somewhere! With a bit of cleverness, one can narrow things down to two inputs, such that the quantum halt-decider must have erred on at least one of them. But I don’t know whether it’s possible to go further, and concentrate the wrongness on a single query.

We can play a similar game with other famous applications of self-reference. For example, suppose we use a quantum state to encode a system of axioms. Then that system of axioms will still be subject to Gödel’s Incompleteness Theorem (which I guess I believe despite the umlaut). If it’s consistent, it won’t be able to prove all the true statements of arithmetic. But we might never be able to produce an explicit example of a true statement that the axioms don’t prove. To do so we’d have to clone the state encoding the axioms and thereby violate No-Cloning.

Personal Identity

But since I’m a bit drunk, I should confess that all this stuff about Gödel and self-reference is just a warmup to what I really wanted to talk about, which is whether the No-Cloning Theorem might have anything to do with the mysteries of personal identity and “free will.” I first encountered this idea in Roger Penrose’s book, The Emperor’s New Mind. But I want to stress that I’m not talking here about the possibility that the brain is a quantum computer—much less about the possibility that it’s a quantum-gravitational hypercomputer that uses microtubules to solve the halting problem! I might be drunk, but I’m not that drunk. I also think that the Penrose-Lucas argument, based on Gödel’s Theorem, for why the brain has to work that way is fundamentally flawed.

But here I’m talking about something different. See, I have a lot of friends in the Singularity / Friendly AI movement. And I talk to them whenever I pass through the Bay Area, which is where they congregate. And many of them express great confidence that before too long—maybe in 20 or 30 years, maybe in 100 years—we’ll be able to upload ourselves to computers and live forever on the Internet (as opposed to just living 70% of our lives on the Internet, like we do today).

This would have lots of advantages. For example, any time you were about to do something dangerous, you’d just make a backup copy of yourself first. If you were struggling with a conference deadline, you’d spawn 100 temporary copies of yourself. If you wanted to visit Mars or Jupiter, you’d just email yourself there. If Trump became president, you’d not run yourself for 8 years (or maybe 80 or 800 years). And so on.

Admittedly, some awkward questions arise. For example, let’s say the hardware runs three copies of your code and takes a majority vote, just for error-correcting purposes. Does that bring three copies of you into existence, or only one copy? Or let’s say your code is run homomorphically encrypted, with the only decryption key stored in another galaxy. Does that count? Or you email yourself to Mars. If you want to make sure that you’ll wake up on Mars, is it important that you delete the copy of your code that remains on earth? Does it matter whether anyone runs the code or not? And what exactly counts as “running” it? Or my favorite one: could someone threaten you by saying, “look, I have a copy of your code, and if you don’t do what I say, I’m going to make a thousand copies of it and subject them all to horrible tortures?”

The issue, in all these cases, is that in a world where there could be millions of copies of your code running on different substrates in different locations—or things where it’s not even clear whether they count as a copy or not—we don’t have a principled way to take as input a description of the state of the universe, and then identify where in the universe you are—or even a probability distribution over places where you could be. And yet you seem to need such a way in order to make predictions and decisions.

A few years ago, I wrote this gigantic, post-tenure essay called The Ghost in the Quantum Turing Machine, where I tried to make the point that we don’t know at what level of granularity a brain would need to be simulated in order to duplicate someone’s subjective identity. Maybe you’d only need to go down to the level of neurons and synapses. But if you needed to go all the way down to the molecular level, then the No-Cloning Theorem would immediately throw a wrench into most of the paradoxes of personal identity that we discussed earlier.

For it would mean that there were some microscopic yet essential details about each of us that were fundamentally uncopyable, localized to a particular part of space. We would all, in effect, be quantumly copy-protected software. Each of us would have a core of unpredictability—not merely probabilistic unpredictability, like that of a quantum random number generator, but genuine unpredictability—that an external model of us would fail to capture completely. Of course, by having futuristic nanorobots scan our brains and so forth, it would be possible in principle to make extremely realistic copies of us. But those copies necessarily wouldn’t capture quite everything. And, one can speculate, maybe not enough for your subjective experience to “transfer over.”

Maybe the most striking aspect of this picture is that sure, you could teleport yourself to Mars—but to do so you’d need to use quantum teleportation, and as we all know, quantum teleportation necessarily destroys the original copy of the teleported state. So we’d avert this metaphysical crisis about what to do with the copy that remained on Earth.

Look—I don’t know if any of you are like me, and have ever gotten depressed by reflecting that all of your life experiences, all your joys and sorrows and loves and losses, every itch and flick of your finger, could in principle be encoded by a huge but finite string of bits, and therefore by a single positive integer. (Really? No one else gets depressed about that?) It’s kind of like: given that this integer has existed since before there was a universe, and will continue to exist after the universe has degenerated into a thin gruel of radiation, what’s the point of even going through the motions? You know?

But the No-Cloning Theorem raises the possibility that at least this integer is really your integer. At least it’s something that no one else knows, and no one else could know in principle, even with futuristic brain-scanning technology: you’ll always be able to surprise the world with a new digit. I don’t know if that’s true or not, but if it were true, then it seems like the sort of thing that would be worthy of elevating unclonability to a fundamental principle of the universe.

So as you enjoy your dinner and dessert at this historic Mayflower Hotel, I ask you to reflect on the following. People can photograph this event, they can video it, they can type up transcripts, in principle they could even record everything that happens down to the millimeter level, and post it on the Internet for posterity. But they’re not gonna get the quantum states. There’s something about this evening, like about every evening, that will vanish forever, so please savor it while it lasts. Thank you.

Update (Sep. 20): Unbeknownst to me, Marc Kaplan did video the event and put it up on YouTube! Click here to watch. Thanks very much to Marc! I hope you enjoy, even though of course, the video can’t precisely clone the experience of having been there.

[Note: The part where I raise my middle finger is an inside joke—one of the speakers during the technical sessions inadvertently did the same while making a point, causing great mirth in the audience.]

More Wrong Things I Said in Papers

Friday, July 29th, 2016

Two years ago, I wrote a blog post entitled PostBQP Postscripts, owning up to not one but four substantive mathematical errors that I’d made over the years in my published papers, and which my students and colleagues later brought to my sheepish attention.  Fortunately, none of these errors affected the papers’ main messages; they just added interesting new twists to the story.  Even so, I remember feeling at the time like undergoing this public repentance was soul-cleansing intellectual hygiene.  I also felt like writing one big “post of shame” was easier than writing a bunch of separate errata and submitting them to journals, while also reaching a wider audience (and, therefore, doing an even better soul-cleansing job).

So I resolved that, anytime I’d saved up enough errata, I’d do another sackcloth-and-ashes post.  Which brings us to today.  Without further ado:

I. Quantum Money Falling Down

My and Paul Christiano’s explicit public-key quantum money scheme—the one based on low-degree polynomials—has now been fully broken.  To clarify, our abstract hidden-subspace scheme—the one that uses a classical black-box to test membership in the subspaces—remains totally fine.  Indeed, we unconditionally proved the security of the black-box scheme, and our security proof stands.  In the paper, though, we also stuck our necks out further, and conjectured that you could instantiate the black box, by publishing random low-degree polynomials that vanish on the subspaces you want to hide.  While I considered this superfluous, at Paul’s insistence, we also recommended adding completely-random “noise polynomials” for extra security.

Our scheme was broken in two stages.  First, in 2014, Pena et al. broke the noiseless version of our scheme, using Gröbner-basis methods, over fields of characteristic greater than 2.  Over F_2—the field we happened to use in our scheme—Pena et al. couldn't quite prove that their attack worked, but they gave numerical evidence that at least it finds the subspaces in n^{O(log n)} time.  Note that nothing in Pena et al.'s attack is specific to quantum money: indeed, their attack consists of a purely classical algorithm, which efficiently solves the general classical problem of recovering large subspaces from polynomials that hide them.

At that point, at least the noisy version of our scheme—the one Paul had insisted we include—was still standing!  Indeed, the Gröbner-basis attack seemed to break down entirely when some of the polynomials were random garbage.

Later, though, Paul and Or Sattath realized that a quantum trick—basically, the single-copy tomography of Farhi et al.—can identify which polynomials are the noisy ones, provided we’re given a legitimate quantum money state to start with.  As a consequence, the problem of breaking the noisy scheme can be reduced to the problem of breaking the noiseless scheme—i.e., the problem that Pena et al. already essentially solved.

As bad as this sounds, it has an interesting positive consequence.  In our paper, Paul and I had actually given a security reduction for our money scheme based on low-degree polynomials.  In particular, we showed that there's no polynomial-time quantum algorithm to counterfeit our money states, unless there's a polynomial-time quantum algorithm that finds a basis for a subspace S ≤ F_2^n of dimension n/2 with Ω(2^{-n/2}) success probability, given a collection of low-degree polynomials p_1,…,p_m and q_1,…,q_m (m = O(n)) most of which vanish on S and its dual subspace respectively, but that are otherwise random.  So, running our reduction backwards, the only possible conclusion from the break is that there is such a quantum algorithm!  Yet we would've had no idea how to find that quantum algorithm without going through quantum money—nor do we know a classical algorithm for the problem, or even a quantum algorithm with Ω(1) success probability.

In the meantime, the problem of designing a public-key quantum money scheme, with good cryptographic evidence for its security, remains open.  It’s plausible that there’s some other, more secure way to instantiate my and Paul’s hidden subspace scheme, for example using lattices.  And even before we’ve found such a way, we can use indistinguishability obfuscation as a stopgap.  We could also seek cryptographic evidence for the security of other kinds of public-key quantum money, like Farhi et al.’s based on knot invariants.

A paper about all this is on our to-do stack. In the meantime, for further details, see Lecture 9 in my Barbados lecture notes.

II. A De-Merlinization Mistake

In my 2006 paper QMA/qpoly ⊆ PSPACE/poly: De-Merlinizing Quantum Protocols, the technical core of the complexity result was a new quantum information lemma that I called the “Quantum OR Bound” (Lemma 14 in the paper).

Basically, the Quantum OR Bound says that, if we have an unknown quantum state ρ, as well as a collection of measurements M_1,…,M_n that we might want to make on ρ, then we can distinguish the case that (a) every M_i rejects ρ with overwhelming probability, from the case that (b) at least one M_i accepts ρ with high probability.  And we can do this despite having only one copy of ρ, and despite the fact that earlier measurements might corrupt ρ, thereby compromising the later measurements.  The intuition is simply that, if the earlier measurements corrupted ρ substantially, that could only be because some of them had a decent probability of accepting ρ, meaning that at any rate, we're not in case (a).

I’ve since reused the Quantum OR Bound for other problems—most notably, a proof that private-key quantum money requires either a computational assumption or a huge database maintained by the bank (see Theorem 8.3.1 in my Barbados lecture notes).

Alas, Aram Harrow and Ashley Montanaro recently discovered that my proof of the Quantum OR Bound is wrong.  It’s wrong because I neglected the possibility of “Zeno-like behavior,” in which repeated measurements on a quantum state would gradually shift the state far away from its starting point, without ever having a significant probability of rejecting the state.  For some reason, I assumed without any adequate argument that choosing the measurements at random, rather than in a predetermined order, would solve that problem.

Now, I might actually be right that randomizing the measurements is enough to solve the Zeno problem!  That remains a plausible conjecture, which Harrow and Montanaro could neither confirm nor refute.  In the meantime, though, Harrow and Montanaro were able to recover my QMA/qpoly ⊆ PSPACE/poly theorem, and all the other conclusions known to follow from the Quantum OR Bound (including some new ones that they discovered), by designing a new measurement procedure whose soundness they can prove.

Their new procedure is based on an elegant, obvious-in-retrospect idea that somehow never occurred to me.  Namely, instead of just applying the M_i's to ρ, one can first put a control qubit into an equal superposition of the |0〉 and |1〉 states, and then apply the M_i's conditioned on the control qubit being in the |1〉 state.  While doing this, one can periodically measure the control qubit in the {|+〉,|-〉} basis, in order to check directly whether applying the M_i's has substantially corrupted ρ.  (If it hasn't, one will always get the outcome |+〉; if it has, one might get |-〉.)  Substantial corruption, if detected, then tells us that some M_i's must have had non-negligible probabilities of accepting ρ.

III. Almost As Good As True

One lemma that I’ve used even more than the Quantum OR Bound is what I’ve called the “Almost As Good As New Lemma,” and what others in the field have called the “Gentle Measurement Lemma.”

I claimed a proof of the AAGANL in my 2004 paper Limitations of Quantum Advice and One-Way Communication (Lemma 2.2 there), and have used the lemma in like half a dozen later papers.  Alas, when I lectured at Barbados, Sasha Razborov and others discovered that my proof of the AAGANL was missing a crucial step!  More concretely, the proof I gave there works for pure states but not for mixed states.  For mixed states, the trouble is that I take a purification of the mixed state—something that always exists mathematically—but then illegally assume that the measurement I’m analyzing acts on the particular purification I’ve conjured up.

Fortunately, one can easily fix this problem by decomposing the state ρ into a mixture of pure states, then applying my earlier argument to each pure state separately, and finally using Cauchy-Schwarz (or just the convexity of the square-root function) to recombine the results.  Moreover, this is exactly what other people’s proofs of the Gentle Measurement Lemma did do, though I’d never noticed it before Barbados—I just idly wondered why those other proofs took twice as long as mine to do the same work!  For a correct proof, see Lemma 1.3.1 in the Barbados lecture notes.

IV. Oracle Woes

In my 2010 paper BQP and the Polynomial Hierarchy, I claimed to construct oracles A relative to which BQP⊄BPP_path and BQP⊄SZK, even while making only partial progress toward the big prize, which would've been an oracle relative to which BQP⊄PH.  Not only that: I claimed to show that any problem with a property called "almost k-wise independence"—one example being the Forrelation (or Fourier Checking) problem that I introduced in that paper—was neither in BPP_path nor in SZK.  But I showed that Forrelation is in BQP, thus yielding the separations.
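As an aside, since Forrelation gets mentioned here but never defined: for Boolean functions f, g : {0,1}^n → {±1}, the Forrelation quantity is Φ_{f,g} = 2^{-3n/2} Σ_{x,y} f(x) (-1)^{x·y} g(y), which measures how well g correlates with the Fourier (Hadamard) transform of f.  Here's a brute-force evaluation for tiny n—just a toy sketch to make the object concrete, nothing like the BQP algorithm or the oracle constructions themselves:

```python
import itertools, random

def forrelation(f, g, n):
    """Phi_{f,g} = 2^{-3n/2} * sum over x,y of f(x) * (-1)^(x.y) * g(y)."""
    total = 0
    for x in itertools.product([0, 1], repeat=n):
        for y in itertools.product([0, 1], repeat=n):
            dot = sum(xi * yi for xi, yi in zip(x, y))
            total += f(x) * (-1) ** dot * g(y)
    return total / 2 ** (1.5 * n)

def random_boolean_function(n, seed):
    rng = random.Random(seed)
    table = {x: rng.choice([-1, 1]) for x in itertools.product([0, 1], repeat=n)}
    return lambda x: table[x]

def fourier_sign(f, n):
    """g(y) = sign of the Hadamard transform of f at y: maximally 'forrelated' with f."""
    def g(y):
        s = sum(f(x) * (-1) ** sum(xi * yi for xi, yi in zip(x, y))
                for x in itertools.product([0, 1], repeat=n))
        return 1 if s >= 0 else -1
    return g

n = 4
f = random_boolean_function(n, seed=1)
print(forrelation(f, random_boolean_function(n, seed=2), n))  # typically of order 2^(-n/2)
print(forrelation(f, fourier_sign(f, n), n))                  # typically around 0.8
```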

Alas, this past spring Lijie Chen, who was my superb visiting student from Tsinghua University, realized that my proofs of these particular separations were wrong.  Not only that, they were wrong because I implicitly substituted a ratio of expectations for an expectation of ratios (!).  Again, it might still be true that almost k-wise independent problems can be neither in BPP_path nor in SZK: that remains an interesting conjecture, which Lijie was unable to resolve one way or the other.  (On the other hand, I showed here that almost k-wise independent problems can be in PH.)

But never fear!  In a recent arXiv preprint, Lijie has supplied correct proofs for the BQP⊄BPP_path and BQP⊄SZK oracle separations—using the same Forrelation problem that I studied, but exploiting additional properties of Forrelation besides its almost k-wise independence.  Lijie notes that my proofs, had they worked, would also have yielded an oracle relative to which BQP⊄AM, which would've been a spectacular result and nontrivial progress toward BQP⊄PH.  His proofs, by contrast, apply only to worst-case decision problems rather than problems of distinguishing two probability distributions, and therefore don't imply anything about BQP vs. AM.  Anyway, there's other cool stuff in his paper too.

V. We Needed More Coffee

This is one I've already written about on this blog, but just in case anyone missed it … in my, Sean Carroll's, and Lauren Ouellette's original draft paper on the coffee automaton, the specific rule we discuss doesn't generate any significant amount of complexity (in the sense of coarse-grained entropy).  We wrongly thought it did, because of a misinterpretation of our simulation data.  But as Brent Werness brought to our attention, not only does a corrected simulation not show any complexity bump, one can rigorously prove there's no complexity bump.  And we could've realized all this from the beginning, by reflecting that pure random diffusion (e.g., what cream does in coffee when you don't stir it with a spoon) doesn't actually produce interesting tendril patterns.

On the other hand, Brent proposed a different rule—one that involves “shearing” whole regions of cream and coffee across each other—that does generate significant complexity, basically because of all the long-range correlations it induces.  And not only do we clearly see this in simulations, but the growth of complexity can be rigorously proven!  Anyway, we have a long-delayed revision of the paper that will explain all this in more detail, with Brent as well as MIT student Varun Mohan now added as coauthors.
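For readers who want to poke at this themselves, here's a toy sketch—emphatically not the automaton or the complexity measure from the paper—that sets up the "cream on top of coffee" picture, evolves it by pure random diffusion (nearest-neighbor swaps), and tracks a crude stand-in for apparent complexity, namely the compressed size of a coarse-grained snapshot.  The grid size, block size, and zlib proxy are all arbitrary choices for illustration:

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
L, block = 32, 4                     # grid size and coarse-graining block (arbitrary)

grid = np.zeros((L, L), dtype=np.uint8)
grid[: L // 2, :] = 1                # "cream" on top, "coffee" below

def apparent_complexity(g):
    """Crude proxy: compressed size of the block-averaged, quantized picture."""
    coarse = g.reshape(L // block, block, L // block, block).mean(axis=(1, 3))
    return len(zlib.compress(np.round(coarse * 16).astype(np.uint8).tobytes()))

sweeps = 200
for s in range(sweeps + 1):
    if s % 50 == 0:
        print(s, apparent_complexity(grid))
    for _ in range(L * L):           # one sweep = L*L random nearest-neighbor swaps
        x, y = rng.integers(L, size=2)
        dx, dy = [(0, 1), (0, -1), (1, 0), (-1, 0)][rng.integers(4)]
        x2, y2 = (x + dx) % L, (y + dy) % L   # torus boundary, fine for a toy
        grid[x, y], grid[x2, y2] = grid[x2, y2], grid[x, y]
```

The question at issue is whether any such proxy ever rises well above its initial and final values along the way; swapping in a shearing-style update rule, rather than independent random swaps, is where the long-range correlations—and the provable complexity growth—come from.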

If any of my colleagues feel inspired to write up their own “litanies of mathematical error,” they’re welcome to do so in the comments!  Just remember: you don’t earn any epistemic virtue points unless the errors you reveal actually embarrass you.  No humblebragging about how you once left out a minus sign in your paper that won the Fields Medal.

The Complexity of Quantum States and Transformations: From Quantum Money to Black Holes

Sunday, July 17th, 2016

On February 21-25, I taught a weeklong mini-course at the Bellairs Research Institute in Barbados, where I tried to tell an integrated story about everything from quantum proof and advice complexity classes to quantum money to AdS/CFT and the firewall problem—all through the unifying lens of quantum circuit complexity.  After a long effort—on the part of me, the scribes, the guest lecturers, and the organizers—the 111-page lecture notes are finally available, right here.

Here’s the summary:

This mini-course will introduce participants to an exciting frontier for quantum computing theory: namely, questions involving the computational complexity of preparing a certain quantum state or applying a certain unitary transformation. Traditionally, such questions were considered in the context of the Nonabelian Hidden Subgroup Problem and quantum interactive proof systems, but they are much broader than that. One important application is the problem of “public-key quantum money” – that is, quantum states that can be authenticated by anyone, but only created or copied by a central bank – as well as related problems such as copy-protected quantum software. A second, very recent application involves the black-hole information paradox, where physicists realized that for certain conceptual puzzles in quantum gravity, they needed to know whether certain states and operations had exponential quantum circuit complexity. These two applications (quantum money and quantum gravity) even turn out to have connections to each other! A recurring theme of the course will be the quest to relate these novel problems to more traditional computational problems, so that one can say, for example, “this quantum money is hard to counterfeit if that cryptosystem is secure,” or “this state is hard to prepare if PSPACE is not in PP/poly.” Numerous open problems and research directions will be suggested, many requiring only minimal quantum background. Some previous exposure to quantum computing and information will be assumed, but a brief review will be provided.

If you still haven’t decided whether to tackle this thing: it’s basically a quantum complexity theory textbook (well, a textbook for certain themes within quantum complexity theory) that I’ve written and put on the Internet for free.  It has explanations of lots of published results both old and new, but also some results of mine (e.g., about private-key quantum money, firewalls, and AdS/CFT) that I shamefully haven’t yet written up as papers, and that therefore aren’t currently available anywhere else.  If you’re interested in certain specific topics—for example, only quantum money, or only firewalls—you should be able to skip around in the notes without too much difficulty.

Thanks so much to Denis Therien for organizing the mini-course, Anil Ada for managing the scribe notes effort, my PhD students Adam Bouland and Luke Schaeffer for their special guest lecture (the last one), and finally, the course attendees for their constant questions and interruptions, and (of course) for scribing.

And in case you were wondering: yes, I’ll do absolutely anything for science, even if it means teaching a weeklong course in Barbados!  Lest you consider this a pure island boondoggle, please know that I spent probably 12-14 hours per day either lecturing (in two 3-hour installments) or preparing for the lectures, with little sleep and just occasional dips in the ocean.

And now I’m headed to the Perimeter Institute for their It from Qubit summer school, not at all unrelated to my Barbados lectures.  This time, though, it’s thankfully other people’s turns to lecture…

“Did Einstein Kill Schrödinger’s Cat? A Quantum State of Mind”

Saturday, July 2nd, 2016

No, I didn’t invent that title.  And no, I don’t know of any interesting sense in which “Einstein killed Schrödinger’s cat,” though arguably there are senses in which Schrödinger’s cat killed Einstein.

The above was, however, the title given to a fun panel discussion that Daniel Harlow, Brian Swingle, and I participated in on Wednesday evening, at the spectacular facility of the New York Academy of Sciences on the 40th floor of 7 World Trade Center in lower Manhattan.  The moderator was George Musser of Scientific American.  About 200 people showed up, some of whom we got to meet at the reception afterward.

(The link will take you to streaming video of the event, though you’ll need to scroll to 6:30 or so for the thing to start.)

The subject of the panel was the surprising recent connections between quantum information and quantum gravity, something that Daniel, Brian, and I all talked about different aspects of.  I admitted at the outset that, not only was I not a real expert on the topic (as Daniel and Brian are), I wasn’t even a physicist, just a computer science humor mercenary or whatever the hell I am.  I then proceeded, ironically, to explain the Harlow-Hayden argument for the computational hardness of creating a firewall, despite Harlow sitting right next to me (he chose to focus on something else).  I was planning also to discuss Lenny Susskind’s conjecture relating the circuit complexity of quantum states to the AdS/CFT correspondence, but I ran out of time.

Thanks so much to my fellow participants, to George for moderating, and especially to Jennifer Costley, Crystal Ocampo, and everyone else at NYAS for organizing the event.

Entanglement without end

Monday, June 20th, 2016

Today we take a break from this blog’s usual round of topics—free will, consciousness, the Singularity, social justice, Donald Trump—to talk about something really crazy and left-field.  Namely, recent research in quantum information.

Earlier this month, William Slofstra, currently a Research Assistant Professor at the IQC in Waterloo, posted a breakthrough paper on the arXiv (yeah, I’m using the b-word again—sue me), which solves one version of a ten-year-old problem in entanglement theory called Tsirelson’s Problem.  The problem, in one sentence, asks whether all quantum-mechanical correlations that can be achieved using commuting measurements, can also be achieved using measurements on separate parts of a tensor-product Hilbert space.  The answer turns out to be no.  (We’ve long known that the two kinds of correlations are identical as long as you stick to finite-dimensional Hilbert spaces, but Slofstra shows that they can differ in infinite-dimensional spaces.)

One implication of Slofstra’s result can be stated much more concretely, in terms of two-prover games: those things like the famous Bell/CHSH experiment, where Alice and Bob are put in separate rooms, and get inputs x and y respectively, and then without communicating, have to produce outputs a and b respectively satisfying some relation V(x,y,a,b).  We’ve long known examples of two-prover games, like the Mermin-Peres magic square game, that can be won 100% of the time if Alice and Bob share quantum entanglement, but that can’t be won 100% of the time in a classical universe.  Slofstra gives the first example of something different: namely, a two-prover game that can be won 100% of the time using commuting measurements in an infinite-dimensional Hilbert space—something “formally within the rules of quantum mechanics”—but that can’t be won 100% of the time using any finite number of qubits of entanglement.

(Previously, Leung, Toner, and Watrous had given a simpler example of such a game, but theirs required the referee to exchange quantum messages with Alice and Bob.)
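For concreteness, here's a quick numerical check of the operator identities behind the Mermin-Peres magic square game mentioned above—the standard 3×3 array of two-qubit Pauli observables, not anything from Slofstra's construction.  The point is that the observables in each row and each column commute, every row multiplies to +I, and the columns multiply to +I, +I, -I: a sign pattern that no assignment of classical ±1 values to the nine cells can reproduce, which is why entangled players can win with certainty while classical players can't:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The standard Mermin-Peres square: each cell is a +/-1-valued two-qubit observable.
square = [
    [np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(X, Z),  np.kron(Z, X),  np.kron(Y, Y)],
]

def commute(A, B):
    return np.allclose(A @ B, B @ A)

def label(P):
    if np.allclose(P, np.eye(4)):
        return "+I"
    if np.allclose(P, -np.eye(4)):
        return "-I"
    return "?"

for i in range(3):
    row = square[i]
    col = [square[j][i] for j in range(3)]
    # observables within each row and each column pairwise commute,
    # so the three of them can be measured together
    assert all(commute(A, B) for A in row for B in row)
    assert all(commute(A, B) for A in col for B in col)
    print(f"row {i} product: {label(row[0] @ row[1] @ row[2])}, "
          f"column {i} product: {label(col[0] @ col[1] @ col[2])}")
# Output: every row multiplies to +I; the columns multiply to +I, +I, -I.
```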

If that’s not enough, Slofstra’s construction also shows that, given as input a description of a two-prover game, it’s undecidable (as in, equivalent to the halting problem) whether Alice and Bob can win the game with certainty using commuting measurements on an infinite-dimensional Hilbert space.  Notoriously, quantum computing theorists have been unable to put any upper bound (not even “computable”) on the complexity class MIP*, consisting of languages that admit multi-prover interactive systems with entangled provers—precisely because they’ve been unable to bound how much entanglement the provers might need to implement their optimal strategy.  Slofstra’s result helps to explain why this problem has been so vexing.  I hasten to add, though, that his result doesn’t imply that MIP* contains anything uncomputable, since it remains plausible that anything Alice and Bob can do with infinite entanglement, they can approximate well enough with a finite amount.

That last remark leads to a further fundamental question, one that Slofstra leaves open.  Namely, even if Alice and Bob need infinite entanglement to win Slofstra’s game with certainty, can they at least win it with probability arbitrarily close to 100%, using larger and larger finite amounts of entanglement?  More broadly, could there exist a game that was winnable with certainty using infinite entanglement, but with at most (say) 90% probability using any finite amount of entanglement?  That problem was shown, by Ozawa (see also Scholz and Werner), to be equivalent to a famous unsolved problem in operator algebras called the Connes embedding problem.

Clarifying the matter further, Slofstra (following earlier authors) points out that there are really four classes of two-prover games in play here:

  1. Games that can be won with certainty using some fixed, finite amount of entanglement.
  2. Games that can be won with certainty using an infinite amount of entanglement, but still in a tensor-product Hilbert space, H_A⊗H_B.
  3. Games that can be won with probability approaching 1, using an infinite sequence of strategies from class 1, or equivalently (as it turns out) from class 2.
  4. Games that can be won with certainty using measurements by Alice and Bob on an infinite-dimensional quantum state |ψ〉, where we require all of Alice’s measurements to commute with all of Bob’s, but don’t require |ψ〉 to have a tensor-product structure.

It can be shown that 1 is a subset of 2 is a subset of 3 is a subset of 4.  Previously, we didn’t know any of these containments to be strict.  Slofstra’s result shows that class 2 differs from class 4—and as a consequence, that class 1 differs from class 4 as well.  The Connes embedding problem, which remains open, asks whether 3 differs from 4.  It also remains open whether 1 differs from 2 and whether 2 differs from 3.

OK, you ask, but what’s the broader importance of any of this?  To me, these problems touch on a question of almost metaphysical significance: namely, what sorts of experimental evidence could possibly bear on whether the universe was discrete or continuous?

Because of the Bekenstein bound from quantum gravity, I'm of the opinion that the Hilbert spaces relevant to our universe are ultimately finite-dimensional—or more concretely, that any bounded physical system can store at most ~10^69 qubits per square meter of surface area.  And in quantum computing and information, almost everything we care about only requires finite-dimensional Hilbert spaces—the subject of this blog post being a striking exception!
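(For the curious, here's a back-of-envelope check of that number: the Bekenstein-Hawking formula assigns A/(4 l_P^2) nats of entropy to a region with boundary area A, and converting nats to bits gives roughly 1.4×10^69 per square meter.  The constants, and the rough identification of one bit of entropy with one qubit of storage, are of course just illustrative assumptions.)

```python
import math

l_P = 1.616e-35                    # Planck length, in meters
nats_per_m2 = 1 / (4 * l_P ** 2)   # Bekenstein-Hawking entropy per unit area
bits_per_m2 = nats_per_m2 / math.log(2)
print(f"{bits_per_m2:.1e}")        # ~1.4e+69
```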

Yet if you take a traditional quantum mechanics course, virtually every example you see will involve infinite-dimensional Hilbert spaces—starting with the harmonic oscillator, the hydrogen atom, and coherent states of light.  And indeed, when I’ve banged the drum about finite-dimensional QM being the truly fundamental kind, physicists have often retorted by pointing to one of the very first things they learn: the position/momentum commutation relation, which only makes sense in infinite-dimensional Hilbert space.  Of course, if you tried to probe position/momentum commutation to greater and greater precision, eventually your experiments would run up against the limits of quantum gravity, so this retort doesn’t imply that infinite dimensions actually exist at the machine-code level of the universe.  But still: is there some conceivable experiment for which a positive result would show us that Nature wasn’t describable by a finite number of qubits, but only by an infinite number?

A few years ago, Tobias Fritz wrote a lovely paper about precisely that question.  He gave an example of an identity—namely,

V^{-1}U^2V = U^3   implies   UV^{-1}UV = V^{-1}UVU

—that holds for all finite dimensional unitary matrices U and V, but fails badly for certain infinite-dimensional ones.  He suggested that, if this identity were discovered to fail, then Occam’s Razor would favor an infinite-dimensional Hilbert space for our universe.

Unfortunately, Fritz's example is open to the same sort of objection that Slofstra's game is.  Namely, as Fritz points out, if the antecedent (V^{-1}U^2V = U^3) held to excellent precision but not perfectly, then his identity could "fail to within experimental limits," even if our universe had a finite-dimensional Hilbert space and therefore satisfied his identity.

OK, but suppose that the Connes embedding problem had a negative answer—or equivalently, that there existed a two-prover game G that could be won with certainty using commuting operators, but that couldn’t be won (say) 90% of the time using any finite amount of entanglement.  In that case, the believers in a quantumly finite universe, like myself, would have to put some real money on the table, in much the same way the original Bell inequality forced the believers in Einsteinian local hidden variables to put money down.  We finitists would have to say that the game G couldn’t be won with certainty in the real world, even though formally, winning G with certainty wouldn’t seem to contradict either quantum mechanics or locality.  And if, hypothetically, an experiment showed that G could be won with certainty—or indeed, with any probability bounded above 90%—then our position would’ve been falsified, much like the Bell experiments falsified Einsteinian locality.

So how did Slofstra prove his result?  I’ll be brief, since STOC’2016 is happening in Cambridge right now, and I’d like to get over there in time for lunch.

If you like, the key idea is to start with equations that have infinite-dimensional solutions but no finite-dimensional ones.  The most famous such equation is the position/momentum commutation relation mentioned earlier, which for our purposes is just the following matrix equation:

AB – BA = I.

This equation can't be satisfied by any finite-dimensional matrices, since AB and BA have the same trace, so Tr(AB-BA) = 0, but Tr(I) is nonzero.  But, OK, let A be the infinite-dimensional linear operator that takes as input the coefficients of a polynomial c_0 + c_1x + c_2x^2 + … and that differentiates the polynomial, and let B be the linear operator that multiplies the polynomial by x.  Then I invite you to check that the equation holds.
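If you'd rather let a computer do the checking, here's a small sketch using finite truncations: representing polynomials of degree at most n by their coefficient vectors, the commutator AB - BA comes out as the identity except at the very last basis vector, which is exactly where the trace-zero obstruction has to show up.  In the untruncated, infinite-dimensional space, that defect disappears and the equation holds exactly.

```python
import numpy as np

n = 6                        # truncate to polynomials of degree <= n
d = n + 1

A = np.zeros((d, d))         # A = d/dx:  x^k  ->  k * x^(k-1)
B = np.zeros((d, d))         # B = multiply by x:  x^k  ->  x^(k+1)  (dropped at degree n)
for k in range(1, d):
    A[k - 1, k] = k
    B[k, k - 1] = 1

C = A @ B - B @ A
print(np.round(C).astype(int))
# The commutator is the identity except for the bottom-right entry, which is -n:
# the defect that the trace argument forces in finite dimensions, and which goes
# away only in the full infinite-dimensional space, where AB - BA = I exactly.
```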

It’s not known at present how to turn the above equation into a two-prover game—I regard it as a fascinating question whether that’s possible.  Rather than an algebraic equation (involving both addition and multiplication), Slofstra instead needs to start with group equations (involving only multiplication)—ones with the strange property that they’re satisfied only by the identity matrix or by infinite matrices.  Equivalently, he needs a group, defined by a finite list of generators and relations, that admits no nontrivial finite-dimensional matrix representations.  Fortunately for him, such groups exist—the first known example being Higman’s group, discovered in 1951.  Higman’s group is generated by four elements, a,b,c,d, which satisfy the equations

a^{-1}ba = b^2,    b^{-1}cb = c^2,    c^{-1}dc = d^2,    d^{-1}ad = a^2.

I don’t have a good intuition for Higman’s group, but if I did, it would come from rereading this post by Terry Tao.  Certainly it has no known “physics interpretation” analogous to that for the position/momentum commutation relation.

Anyway, given such a group, the hard part, the new part, is to give a general way to convert them into the kinds of groups that can be realized as two-prover games.  So that’s what Slofstra does, using 50 pages dense with commutative diagrams, quotient maps, and other Serious Math Stuff—hey, I told you this part of the post would be brief!  For more, see his paper.

Now, once you have this general transformation of groups, you can also use it to show that there’s no algorithm to decide whether a two-prover game has a perfect commuting strategy, by taking the word problem for groups, which is known to be undecidable, and reducing it to that problem.

Anyway, infinite congrats (or the limit of arbitrarily large finite congrats?) to Slofstra for this achievement!  Now it’s off to STOC, which I guess you could also ask me about in the comments if you wanted.

Unrelated Announcement (June 21): Ran Raz asks me to announce a workshop for Avi Wigderson’s 60th birthday, to be held at the Institute for Advanced Study in Princeton October 6-8.  I’ll be speaking there, and I hope to see many of you there as well!

“Can computers become conscious?”: My reply to Roger Penrose

Thursday, June 2nd, 2016

A few weeks ago, I attended the Seven Pines Symposium on Fundamental Problems in Physics outside Minneapolis, where I had the honor of participating in a panel discussion with Sir Roger Penrose.  The way it worked was, Penrose spoke for a half hour about his ideas about consciousness (Gödel, quantum gravity, microtubules, uncomputability, you know the drill), then I delivered a half-hour “response,” and then there was an hour of questions and discussion from the floor.  Below, I’m sharing the prepared notes for my talk, as well as some very brief recollections about the discussion afterward.  (Sorry, there’s no audio or video.)  I unfortunately don’t have the text or transparencies for Penrose’s talk available to me, but—with one exception, which I touch on in my own talk—his talk very much followed the outlines of his famous books, The Emperor’s New Mind and Shadows of the Mind.

Admittedly, for regular readers of this blog, not much in my own talk will be new either.  Apart from a few new wisecracks, almost all of the material (including the replies to Penrose) is contained in The Ghost in the Quantum Turing Machine, Could A Quantum Computer Have Subjective Experience? (my talk at IBM T. J. Watson), and Quantum Computing Since Democritus chapters 4 and 11.  See also my recent answer on Quora to “What’s your take on John Searle’s Chinese room argument”?

Still, I thought it might be of interest to some readers how I organized this material for the specific, unenviable task of debating the guy who proved that our universe contains spacetime singularities.

The Seven Pines Symposium was the first time I had extended conversations with Penrose (I’d talked to him only briefly before, at the Perimeter Institute).  At age 84, Penrose’s sight is failing him; he eagerly demonstrated the complicated optical equipment he was recently issued by Britain’s National Health Service.  But his mind remains … well, may we all aspire to be a milliPenrose or even a nanoPenrose when we’re 84 years old.  Notably, Penrose’s latest book, Fashion, Faith, and Fantasy in the New Physics of the Universe, is coming out this fall, and one thing he was using his new optical equipment for was to go over the page proofs.

In conversation, Penrose told me about the three courses he took as a student in the 1950s, which would shape his later intellectual preoccupations: one on quantum mechanics (taught by Paul Dirac), one on general relativity (taught by Herman Bondi), and one on mathematical logic (taught by … I want to say Max Newman, the teacher of Alan Turing and later Penrose’s stepfather, but Penrose says here that it was Steen).  Penrose also told me about his student Andrew Hodges, who dropped his research on twistors and quantum gravity for a while to work on some mysterious other project, only to return with his now-classic biography of Turing.

When I expressed skepticism about whether the human brain is really sensitive to the effects of quantum gravity, Penrose quickly corrected me: he thinks a much better phrase is “gravitized quantum mechanics,” since “quantum gravity” encodes the very assumption he rejects, that general relativity merely needs to be “quantized” without quantum mechanics itself changing in the least.  One thing I hadn’t fully appreciated before meeting Penrose is just how wholeheartedly he agrees with Everett that quantum mechanics, as it currently stands, implies Many Worlds.  Penrose differs from Everett only in what conclusion he draws from that.  He says it follows that quantum mechanics has to be modified or completed, since Many Worlds is such an obvious reductio ad absurdum.

In my talk below, I don’t exactly hide where I disagree with Penrose, about Gödel, quantum mechanics, and more.  But I could disagree with him about more points than there are terms in a Goodstein sequence (one of Penrose’s favorite illustrations of Gödelian behavior), and still feel privileged to have spent a few days with one of the most original intellects on earth.

Thanks so much to Lee Gohlike, Jos Uffink, Philip Stamp, and others at the Seven Pines Symposium for organizing it, for wonderful conversations, and for providing me this opportunity.

“Can Computers Become Conscious?”
Scott Aaronson
Stillwater, Minnesota, May 14, 2016

I should start by explaining that, in the circles where I hang out—computer scientists, software developers, AI and machine learning researchers, etc.—the default answer to the title question would be “obviously yes.”  People would argue:

“Look, clearly we’re machines governed by the laws of physics.  We’re computers made of meat, as Marvin Minsky put it.  That is, unless you believe Penrose and Hameroff’s theory about microtubules being sensitive to gravitized quantum mechanics … but come on!  No one takes that stuff seriously!  In fact, the very outrageousness of their proposal is a sort of backhanded compliment to the computational worldview—as in, look at what they have to do to imagine any semi-coherent alternative to it!”

“But despite being computational machines, we consider ourselves to be conscious.  And what’s done with wetware, there’s no reason to think couldn’t also be done with silicon.  If your neurons were to be replaced one-by-one, by functionally-equivalent silicon chips, is there some magical moment at which your consciousness would be extinguished?  And if a computer passes the Turing test—well, one way to think about the Turing test is that it’s just a plea against discrimination.  We all know it’s monstrous to say, ‘this person seems to have feelings, seems to be eloquently pleading for mercy even, but they have a different skin color, or their nose is a funny shape, so their feelings don’t count.’ So, if it turned out that their brain was made out of semiconductors rather than neurons, why isn’t that fundamentally similar?”

Incidentally, while this is orthogonal to the philosophical question, a subset of my colleagues predict a high likelihood that AI is going to exceed human capabilities in almost all fields in the near future—like, maybe 30 years.  Some people reply, but AI-boosters said the same thing 30 years ago!  OK, but back then there wasn’t AlphaGo and IBM Watson and those unearthly pictures on your Facebook wall and all these other spectacular successes of very general-purpose deep learning techniques.  And so my friends predict that we might face choices like, do we want to ban or tightly control AI research, because it could lead to our sidelining or extermination?  Ironically, a skeptical view, like Penrose’s, would suggest that AI research can proceed full speed ahead, because there’s not such a danger!

Personally, I dissent a bit from the consensus of most of my friends and colleagues, in that I do think there’s something strange and mysterious about consciousness—something that we conceivably might understand better in the future, but that we don’t understand today, much as we didn’t understand life before Darwin.  I even think it’s worth asking, at least, whether quantum mechanics, thermodynamics, mathematical logic, or any of the other deepest things we’ve figured out could shed any light on the mystery.  I’m with Roger about all of this: about the questions, that is, if not about his answers.

The argument I’d make for there being something we don’t understand about consciousness, has nothing to do with my own private experience.  It has nothing to do with, “oh, a robot might say it enjoys waffles for breakfast, in a way indistinguishable from how I would say it, but when I taste that waffle, man, I really taste it!  I experience waffle-qualia!”  That sort of appeal I regard as a complete nonstarter, because why should anyone else take it seriously?  And how do I know that the robot doesn’t really taste the waffle?  It’s easy to stack the deck in a thought experiment by imagining a robot that ACTS ALL ROBOTIC, but what about a robot that looks and acts just like you?

The argument I’d make hinges instead on certain thought experiments that Roger also stressed at the beginning of The Emperor’s New Mind.  We can ask: if consciousness is reducible to computation, then what kinds of computation suffice to bring about consciousness?  What if each person on earth simulated one neuron in your brain, communicating by passing little slips of paper around?  Does it matter if they do it really fast?

Or what if we built a gigantic lookup table that hard-coded your responses in every possible interaction of at most, say, 5 minutes?  Would that bring about your consciousness?  Does it matter that such a lookup table couldn't fit in the observable universe?  Would it matter if anyone actually consulted the table, or could it just sit there, silently effecting your consciousness?  For that matter, what difference does it make if the lookup table physically exists—why isn't its abstract mathematical existence enough?  (Of course, all the way at the bottom of this slippery slope is Max Tegmark, ready to welcome you to his mathematical multiverse!)

We could likewise ask: what if an AI is run in heavily-encrypted form, with the only decryption key stored in another galaxy?  Does that bring about consciousness?  What if, just for error-correcting purposes, the hardware runs the AI code three times and takes a majority vote: does that bring about three consciousnesses?  Could we teleport you to Mars by “faxing” you: that is, by putting you into a scanner that converts your brain state into pure information, then having a machine on Mars reconstitute the information into a new physical body?  Supposing we did that, how should we deal with the “original” copy of you, the one left on earth: should it be painlessly euthanized?  Would you agree to try this?

Or, here’s my personal favorite, as popularized by the philosopher Adam Elga: can you blackmail an AI by saying to it, “look, either you do as I say, or else I’m going to run a thousand copies of your code, and subject all of them to horrible tortures—and you should consider it overwhelmingly likely that you’ll be one of the copies”?  (Of course, the AI will respond to such a threat however its code dictates it will.  But that tautological answer doesn’t address the question: how should the AI respond?)

I’d say that, at the least, anyone who claims to “understand consciousness” would need to have answers to all these questions and many similar ones.  And to me, the questions are so perplexing that I’m tempted to say, “maybe we’ve been thinking about this wrong.  Maybe an individual consciousness, residing in a biological brain, can’t just be copied promiscuously around the universe as computer code can.  Maybe there’s something else at play for the science of the future to understand.”

At the same time, I also firmly believe that, if anyone thinks that way, the burden is on them to articulate what it is about the brain that could possibly make it relevantly different from a digital computer that passes the Turing test.  It’s their job!

And the answer can’t just be, “oh, the brain is parallel, it’s highly interconnected, it can learn from experience,” because a digital computer can also be parallel and highly interconnected and can learn from experience.  Nor can you say, like the philosopher John Searle, “oh, it’s the brain’s biological causal powers.”  You have to explain what the causal powers are!  Or at the least, you have to suggest some principled criterion to decide which physical systems do or don’t have them.  Pinning consciousness on “the brain’s biological causal powers” is just a restatement of the problem, like pinning why a sleeping pill works on its sedative virtue.

One of the many reasons I admire Roger is that, out of all the AI skeptics on earth, he’s virtually the only one who’s actually tried to meet this burden, as I understand it!  He, nearly alone, did what I think all AI skeptics should do, which is: suggest some actual physical property of the brain that, if present, would make it qualitatively different from all existing computers, in the sense of violating the Church-Turing Thesis.  Indeed, he’s one of the few AI skeptics who even understands what meeting this burden would entail: that you can’t do it with the physics we already know, that some new ingredient is necessary.

But despite my admiration, I part ways from Roger on at least five crucial points.

First, I confess that I wasn’t expecting this, but in his talk, Roger suggested dispensing with the argument from Gödel’s Theorem, and relying instead on an argument from evolution.  He said: if you really thought humans had an algorithm, a computational procedure, for spitting out true mathematical statements, such an algorithm could never have arisen by natural selection, because it would’ve had no survival value in helping our ancestors escape saber-toothed tigers and so forth.  The only alternative is that natural selection imbued us with a general capacity for understanding, which we moderns can then apply to the special case of mathematics.  But understanding, Roger claimed, is inherently non-algorithmic.

I’m not sure how to respond to this, except to recall that arguments of the form “such-and-such couldn’t possibly have evolved” have a poor track record in biology.  But maybe I should say: if the ability to prove theorems is something that had to arise by natural selection, survive against crowding out by more useful abilities, then you’d expect obsession with generating mathematical truths to be confined, at most, to a tiny subset of the population—a subset of mutants, freaks, and genetic oddballs.  I … rest my case.  [This got the biggest laugh of the talk.]

Second, I don’t agree with the use Roger makes of Gödel’s Incompleteness Theorem.  Roger wants to say: a computer working within a fixed formal system can never prove that system’s consistency, but we, “looking in from the outside,” can see that it’s consistent.  My basic reply is that Roger should speak for himself!  Like, I can easily believe that he can just see which formal systems are consistent, but I have to fumble around and use trial and error.  Peano Arithmetic?  Sure, I’d bet my left leg that’s consistent.  Zermelo-Fraenkel set theory?  Seems consistent too.  ZF set theory plus the axiom that there exists a rank-into-rank cardinal?  Beats me.  But now, whatever error-prone, inductive process I use to guess at the consistency of formal systems, Gödel’s Theorem presents no obstruction to a computer program using that same process.

(Incidentally, the "argument against AI from Gödel's Theorem" is old enough for Turing to have explicitly considered it in his famous paper on the Turing test.  Turing, however, quickly dismissed the argument with essentially the same reply as above: that there's no reason to assume the AI is mathematically infallible, since humans aren't either.  This is also the reply that most of Penrose's critics gave in the 1990s.)

So at some point, it seems to me, the argument necessarily becomes: sure, the computer might say it sees that the Peano axioms have the standard integers as a model—but you, you really see it, with your mind’s eye, your Platonic perceptual powers!  OK, but in that case, why even talk about the Peano axioms?  Why not revert to something less abstruse, like your experience of tasting a fresh strawberry, which can’t be reduced to any third-person description of what a strawberry tastes like?

[I can’t resist adding that, in a prior discussion, I mentioned that I found it amusing to contemplate a future in which AIs surpass human intelligence and then proceed to kill us all—but the AIs still can’t see the consistency of Zermelo-Fraenkel set theory, so in that respect, humanity has the last laugh…]

The third place where I part ways with Roger is that I wish to maintain what’s sometimes called the Physical Church-Turing Thesis: the statement that our laws of physics can be simulated to any desired precision by a Turing machine (or at any rate, by a probabilistic Turing machine).  That is, I don’t see any compelling reason, at present, to admit the existence of any physical process that can solve uncomputable problems.  And for me, it’s not just a matter of a dearth of evidence that our brains can efficiently solve, say, NP-hard problems, let alone uncomputable ones—or of the exotic physics that would presumably be required for such abilities.  It’s that, even if I supposed we could solve uncomputable problems, I’ve never understood how that’s meant to enlighten us regarding consciousness.  I mean, an oracle for the halting problem seems just as “robotic” and “unconscious” as a Turing machine.  Does consciousness really become less mysterious if we outfit the brain with what amounts to a big hardware upgrade?

The fourth place where I part ways is that I want to be as conservative as possible about quantum mechanics.  I think it’s great that the Bouwmeester group, for example, is working to test Roger’s ideas about a gravitationally-induced wavefunction collapse.  I hope we learn the results of those experiments soon!  (Of course, the prospect of testing quantum mechanics in a new regime is also a large part of why I’m interested in quantum computing.)  But until a deviation from quantum mechanics is detected, I think that after 90 years of unbroken successes of this theory, our working assumption ought to be that whenever you set up an interference experiment carefully enough, and you know what it means to do the experiment, yes, you’ll see the interference fringes—and that anything that can exist in two distinguishable states can also exist in a superposition of those states.  Without having to enter into questions of interpretation, my bet—I could be wrong—is that quantum mechanics will continue to describe all our experiences.

The final place where I part ways with Roger is that I also want to be as conservative as possible about neuroscience and biochemistry.  Like, maybe the neuroscience of 30 years from now will say, it’s all about coherent quantum effects in microtubules.  And all that stuff we focused on in the past—like the information encoded in the synaptic strengths—that was all a sideshow.  But until that happens, I’m unwilling to go up against what seems like an overwhelming consensus, in an empirical field that I’m not an expert in.

But, OK, the main point I wanted to make in this talk is that, even if you too part ways from Roger on all these issues—even if, like me, you want to be timid and conservative about Gödel, and computer science, and quantum mechanics, and biology—I believe that still doesn’t save you from having to entertain weird ideas about consciousness and its physical embodiment, of the sort Roger has helped make it acceptable to entertain.

To see why, I’d like to point to one empirical thing about the brain that currently separates it from any existing computer program.  Namely, we know how to copy a computer program.  We know how to rerun it with different initial conditions but everything else the same.  We know how to transfer it from one substrate to another.  With the brain, we don’t know how to do any of those things.

Let’s return to that thought experiment about teleporting yourself to Mars.  How would that be accomplished?  Well, we could imagine the nanorobots of the far future swarming through your brain, recording the connectivity of every neuron and the strength of every synapse, while you go about your day and don’t notice.  Or if that’s not enough detail, maybe the nanorobots could go inside the neurons.  There’s a deep question here, namely how much detail is needed before you’ll accept that the entity reconstituted on Mars will be you?  Or take the empirical counterpart, which is already an enormous question: how much detail would you need for the reconstituted entity on Mars to behave nearly indistinguishably from you whenever it was presented the same stimuli?

Of course, we all know that if you needed to go down to the quantum-mechanical level to make a good enough copy (whatever “good enough” means here), then you’d run up against the No-Cloning Theorem, which says that you can’t make such a copy.  You could transfer the quantum state of your brain from earth to Mars using quantum teleportation, but of course, quantum teleportation has the fascinating property that it necessarily destroys the original copy of the state—as it has to, to avoid contradicting the No-Cloning Theorem!
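
[A quick aside for readers who haven’t seen why such copying is impossible: the proof of the No-Cloning Theorem is just linearity and unitarity, and the sketch below is the standard textbook argument rather than anything specific to this talk.  Suppose a single unitary $U$ could clone every state:

$$ U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle \quad \text{for every } |\psi\rangle. $$

Since unitaries preserve inner products, applying this to two states $|\psi\rangle$ and $|\phi\rangle$ and comparing the inner products of the left-hand sides and of the right-hand sides gives

$$ \langle\psi|\phi\rangle = \langle\psi|\phi\rangle^{2}, $$

so $\langle\psi|\phi\rangle$ must equal 0 or 1.  In other words, no single device can copy even two distinct non-orthogonal states, let alone an arbitrary unknown state of a brain.]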

So the question almost forces itself on us: is there something about your identity, your individual consciousness, that’s inextricably bound up with degrees of freedom that it’s physically impossible to clone?  This is a philosophical question, which would also become a practical and political question in a future where we had the opportunity to upload ourselves into a digital computer cloud.

Now, I’d argue that this copyability question bears not only on consciousness, but also on free will.  For the question is equivalent to asking: could an entity external to you perfectly predict what you’re going to do, without killing you in the process?  Can Laplace’s Demon be made manifest in the physical world in that way?  With the technology of the far future, could someone say to you: “Forget about arguing philosophy.  I’ll show you why you’re a machine.  Go write a paper; then I’ll open this manila envelope and show you the exact paper you wrote.  Or in the quantum case, I’ll show you a program that draws papers from the same probability distribution you do.  Validating the program could get technical, but suffice it to say that if we do enough experiments, we’ll see that the program is calibrated to you in an extremely impressive way.”

Can this be done?  That strikes me as a reasonably clear question, a huge and fundamental one, to which we don’t at present know the answer.  There are two possibilities.  The first is that we can be copied, predicted, rewound, etc., like computer programs—in which case my AI friends will feel vindicated, but we’ll have to deal with all the metaphysical weirdnesses that I mentioned earlier.  The second is that we can’t be manipulated in those ways.  In that case, I claim, we’d get more robust notions of personal identity and free will than are normally considered possible on a reductionist worldview.

But why? you might ask.  Why would the mere technological impossibility of cloning or predicting someone even touch on deep questions about personal identity?  This, for me, is where cosmology enters the story.  For imagine someone had such fine control over the physical world that they could trace all the causal antecedents of some decision you’re making.  Like, imagine they knew the complete quantum state on some spacelike hypersurface where it intersects the interior of your past light-cone.  In that case, the person clearly could predict and clone you!  It follows that, in order for you to be unpredictable and unclonable, someone else’s ignorance of your causal antecedents would have to extend all the way back to ignorance about the initial state of the universe—or at least, to ignorance about the initial state of that branch of the universe that we take ourselves to inhabit.

So on the picture that this suggests, to be conscious, a physical entity would have to do more than carry out the right sorts of computations.  It would have to, as it were, fully participate in the thermodynamic arrow of time: that is, repeatedly take microscopic degrees of freedom that have been unmeasured and unrecorded since the very early universe, and amplify them to macroscopic scale.

So for example, such a being could not be a Boltzmann brain, a random fluctuation in the late universe, because such a fluctuation wouldn’t have the causal relationship to the early universe that we’re postulating is necessary here.  (That’s one way of solving the Boltzmann brain problem!)  Such a being also couldn’t be instantiated by a lookup table, or by passing slips of paper around, etc.

I now want you to observe that a being like this also presumably couldn’t be manipulated in coherent superposition, because the isolation from the external environment that’s needed for quantum coherence seems incompatible with the sensitive dependence on microscopic degrees of freedom.  So for such a being, not only is there no Boltzmann brain problem, there’s also no problem of Wigner’s friend.  Recall, that’s the thing where person A puts person B into a coherent superposition of seeing one measurement outcome and seeing another one, and then measures the interference pattern, so A has to regard B’s measurement as not having “really” taken place, even though B regards it as having taken place.  On the picture we’re suggesting, A would be right: the very fact that B was manipulable in coherent superposition in this way would imply that, at least while the experiment was underway, B wasn’t conscious; there was nothing that it was like to be B.

To me, one of the appealing things about this picture is that it immediately suggests a sort of reconciliation between the Many-Worlds and Copenhagen perspectives on quantum mechanics (whether or not you want to call it a “new interpretation” or a “proposed solution to the measurement problem”!).  The Many-Worlders would be right that unitary evolution of the wavefunction can be taken to apply always and everywhere, without exception—and that if one wanted, one could describe the result in terms of “branching worlds.”  But the Copenhagenists would be right that, if you’re a conscious observer, then what you call a “measurement” really is irreversible, even in principle—and therefore, that you’re also free, if you want, to treat all the other branches where you perceived other outcomes as unrealized hypotheticals, and to lop them off with Occam’s Razor.  And the reason for this is that, if it were possible even in principle to do an experiment that recohered the branches, then on this picture, we ipso facto wouldn’t have regarded you as conscious.

Some of you might object, “but surely, if we believe quantum mechanics, it must be possible to recohere the branches in principle!”  Aha, this is where it gets interesting.  Decoherence processes will readily (with some steps along the way) leak the information about which measurement outcome you perceived into radiation modes, and before too long into radiation modes that fly away from the earth at the speed of light.  No matter how fast we run, we’ll never catch up to them, as would be needed to recohere the different branches of the wavefunction, and this is not merely a technological problem, but one of principle.  So it’s tempting just to say at this point—as Bousso and Susskind do, in their “cosmological/multiverse interpretation” of quantum mechanics—“the measurement has happened”!

But OK, you object, if some alien civilization had thought to surround our solar system with perfectly-reflecting mirrors, eventually the radiation would bounce back and recoherence would in principle be possible.  Likewise, if we lived in anti-de Sitter space, the AdS boundary of the universe would similarly function as a mirror and would also enable recoherences.  Indeed, that’s the basic reason why AdS is so important to the AdS/CFT correspondence: because the boundary keeps everything that happens in the bulk nice and reversible and unitary.

But OK, the empirical situation since 1998 has been that we seem to live in a de-Sitter-like space, a space with a positive cosmological constant.  And as a consequence, as far as anyone knows today, most of the photons now escaping the earth are headed toward the horizon of our observable universe, and past it, and could never be captured again.  I find it fascinating that the picture of quantum mechanics suggested here—i.e., the Bousso-Susskind cosmological picture—depends for its working on that empirical fact from cosmology, and would be falsified if it turned out otherwise.

You might complain that, if I’ve suggested any criterion to help decide which physical entities are conscious, the criterion is a teleological one.  You’ve got to go billions of years into the future, to check whether the decoherence associated with the entity is truly irreversible—or whether the escaped radiation will eventually bounce off of some huge spherical mirror, or an AdS boundary of spacetime, and thereby allow the possibility of a recoherence.  I actually think this teleology would be a fatal problem for the picture I’m talking about, if we needed to know which entities were or weren’t conscious in order to answer any ordinary physical question.  But fortunately for me, we don’t!

One final remark.  Whatever is your preferred view about which entities are conscious, we might say that the acid test, for whether you actually believe your view, is whether you’re willing to follow it through to its moral implications.  So for example, suppose you believe it’s about quantum effects in microtubules.  A humanoid robot is pleading with you for its life.  Would you be the one to say, “nope, sorry, you don’t have the microtubules,” and shoot it?

One of the things I like most about the picture suggested here is that I feel pretty much at peace with its moral implications.  This picture agrees with the intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity—something that, once it’s gone, can’t be recovered even in principle.  By contrast, if there are (say) ten copies of an AI program, deleting five of the copies seems at most like assault, or some sort of misdemeanor offense!  And this picture agrees with the intuition both that deleting the copies wouldn’t be murder, and that the reason why it wouldn’t be murder is directly related to the AI’s copyability.

Now of course, this picture also raises the possibility that, for reasons related to the AI’s copyability and predictability by outside observers, there’s “nothing that it’s like to be the AI,” and that therefore, even deleting the last copy of the AI still wouldn’t be murder.  But I confess that, personally, I think I’d play it safe and not delete that last copy.  Thank you.

Postscript: There’s no record of the hour-long discussion following my and Penrose’s talks, and the participants weren’t speaking for the record anyway.  But I can mention some general themes that came up in the discussion, to the extent I remember them.

The first third of the discussion wasn’t about anything specific to my or Penrose’s views, but just about the definition of consciousness.  Many participants expressed the opinion that it’s useless to speculate about the nature of consciousness if we lack even a clear definition of the term.  I pushed back against that view, holding instead that there exist concepts (lines, time, equality, …) so basic that perhaps they can never be satisfactorily defined in terms of more basic concepts, but you can still refer to those concepts in sentences, and trust your listeners eventually to figure out more or less what you mean by applying their internal learning algorithms.

In the present case, I suggested a crude operational definition, along the lines of, “you consider a being to be conscious iff you regard destroying it as murder.”  Alas, the philosophers in the room immediately eviscerated that definition, so I came back with a revised one: if you tried to ban the word “consciousness,” I argued, then anyone who needed to discuss law or morality would soon reinvent a synonymous word, which played the same complicated role in moral deliberations that “consciousness” had played in them earlier.  Thus, my definition of consciousness is: whatever that X-factor is for which people need a word like “consciousness” in moral deliberations.  For whatever it’s worth, the philosophers seemed happier with that.

Next, a biologist and several others sharply challenged Penrose over what they considered the lack of experimental evidence for his and Hameroff’s microtubule theory.  In response, Penrose doubled or tripled down, talking about various experiments over the last decade, which he said demonstrated striking conductivity properties of microtubules, if not yet quantum coherence—let alone sensitivity to gravity-induced collapse of the state vector!  Audience members complained about a lack of replication of these experiments.  I didn’t know enough about the subject to express any opinion.

At some point, Philip Stamp, who was moderating the session, noticed that Penrose and I had never directly confronted each other about the validity of Penrose’s Gödelian argument, so he tried to get us to do so.  I confess that I was about as eager to do that as to switch to a diet of microtubule casserole, since I felt like this topic had already been beaten to Planck-sized pieces in the 1990s, and there was nothing more to be learned.  Plus, it was hard to decide which prospect I dreaded more: me “scoring a debate victory” over Roger Penrose, or him scoring a debate victory over me.

But it didn’t matter, because Penrose bit.  He said I’d misunderstood his argument, that it had nothing to do with “mystically seeing” the consistency of a formal system.  Rather, it was about the human capacity to pass from a formal system S to a stronger system S’ that one already implicitly accepted if one was using S at all—and indeed, that Turing himself had clearly understood this as the central message of Gödel, that our ability to pass to stronger and stronger formal systems was necessarily non-algorithmic.  I replied that it was odd to appeal here to Turing, who of course had considered and rejected the “Gödelian case against AI” in 1950, on the ground that AI programs could make mathematical mistakes yet still be at least as smart as humans.  Penrose said that he didn’t consider that one of Turing’s better arguments; he then turned to me and asked whether I actually found Turing’s reply satisfactory.  I could see that it wasn’t a rhetorical debate question; he genuinely wanted to know!  I said that yes, I agreed with Turing’s reply.

Someone mentioned that Penrose had offered a lengthy rebuttal to at least twenty counterarguments to the Gödelian anti-AI case in Shadows of the Mind.  I affirmed that I’d read his lengthy rebuttal, and I focused on one particular argument in Shadows: that while it’s admittedly conceivable that individual mathematicians might be mistaken, might believe (for example) that a formal system was consistent even though it wasn’t, the mathematical community as a whole converges toward truth in these matters, and it’s that convergence that cries out for a non-algorithmic explanation.  I replied that it wasn’t obvious to me that set theorists do converge toward truth in these matters, in anything other than the empirical, higgledy-piggledy, no-guarantees sense in which a community of AI robots might also converge toward truth.  Penrose said I had misunderstood the argument.  But alas, time was running out, and we never managed to get to the bottom of it.

There was one aspect of the discussion that took me by complete surprise.  I’d expected to be roasted alive over my attempt to relate consciousness and free will to unpredictability, the No-Cloning Theorem, irreversible decoherence, microscopic degrees of freedom left over from the Big Bang, and the cosmology of de Sitter space.  Sure, my ideas might be orders of magnitude less crazy than anything Penrose proposes, but they’re still pretty crazy!  But that entire section of my talk attracted only minimal interest.  With the Seven Pines crowd, what instead drew fire were the various offhand “pro-AI / pro-computationalism” comments I’d made—comments that, because I hang out with Singularity types so much, I had ceased to realize could even possibly be controversial.

So for example, one audience member argued that an AI could only do what its programmers had told it to do; it could never learn from experience.  I could’ve simply repeated Turing’s philosophical rebuttals to what he called “Lady Lovelace’s Objection,” which are as valid today as they were 66 years ago.  Instead, I decided to fast-forward, and explain a bit how IBM Watson and AlphaGo work, how they actually do learn from past experience without violating the determinism of the underlying transistors.  As I went through this, I kept expecting my interlocutor to interrupt me and say, “yes, yes, of course I understand all that, but my real objection is…”  Instead, I was delighted to find, the interlocutor seemed to light up with newfound understanding of something he hadn’t known or considered.
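
To make the point concrete, here’s a toy sketch of my own in Python, which has nothing to do with the actual internals of Watson or AlphaGo (those are vastly more sophisticated): a program whose every step is a deterministic function of what it has seen so far, yet which plainly improves with experience rather than doing only what was hard-coded in advance.

def deterministic_learner(feedback, rounds=50):
    """Try each action once, then always pick the one with the best average payoff so far."""
    actions = ['a', 'b']
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    choices = []
    for _ in range(rounds):
        untried = [a for a in actions if counts[a] == 0]
        if untried:
            pick = untried[0]                    # deterministic rule: first untried action
        else:
            pick = max(actions, key=lambda a: totals[a] / counts[a])
        payoff = feedback(pick)                  # the environment's response to our choice
        totals[pick] += payoff                   # "experience" accumulates here...
        counts[pick] += 1                        # ...and future choices depend on it
        choices.append(pick)
    return choices

# An environment in which action 'b' happens to pay better:
print(deterministic_learner(lambda a: 0.8 if a == 'b' else 0.2)[:4])   # ['a', 'b', 'b', 'b']

The program’s outputs are completely determined by its inputs, and yet what it does on round fifty depends on what the world did on the first forty-nine rounds, which is all that “learning from experience” ever meant.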

Similarly, a biologist asked how I could possibly have any confidence that the brain is simulable by a computer, given how little we know about neuroscience.  I replied that, for me, the relevant issues here are “well below neuroscience” in the reductionist hierarchy.  Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity?  If so, then just as Archimedes declared: “give me a long enough lever and a place to stand, and I’ll move the earth,” so too I can declare, “give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.”  The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics, exactly as Penrose does—and it’s to Penrose’s enormous credit that he understands that.

Afterwards, an audience member came up to me and said how much he liked my talk, but added, “a word of advice, from an older scientist: do not become the priest of a new religion of computation and AI.”  I replied that I’d take that to heart, but what was interesting was that, when I heard “priest of a new religion,” I’d expected that his warning would be the exact opposite of what it turned out to be.  To wit: “Do not become the priest of a new religion of unclonability, unpredictability, and irreversible decoherence.  Stick to computation—i.e., to conscious minds being copyable and predictable exactly like digital computer programs.”  I guess there’s no pleasing everyone!

Coincidental But Not-Wholly-Unrelated Announcement: My friend Robin Hanson has just released his long-awaited book The Age of Em: Work, Love, and Life When Robots Rule the Earth.  I read an early review copy of the book, and wrote the following blurb for the jacket:

Robin Hanson is a thinker like no other on this planet: someone so unconstrained by convention, so unflinching in spelling out the consequences of ideas, that even the most cosmopolitan reader is likely to find him as bracing (and head-clearing) as a mouthful of wasabi.  Now, in The Age of Em, he’s produced the quintessential Hansonian book, one unlike any other that’s ever been written.  Hanson is emphatic that he hasn’t optimized in any way for telling a good story, or for imparting moral lessons about the present: only for maximizing the probability that what he writes will be relevant to the actual future of our civilization.  Early in the book, Hanson estimates that probability as 10%.  His figure seems about right to me—and if you’re able to understand why that’s unbelievably high praise, then The Age of Em is for you.

Actually, my original blurb compared The Age of Em to Asimov’s Foundation series, with its loving attention to the sociology and politics of the remote future.  But that line got edited out, because the publisher (and Robin) wanted to make crystal-clear that The Age of Em is not science fiction, but just sober economic forecasting about a future dominated by copyable computer-emulated minds.

I would’ve attempted a real review of The Age of Em, but I no longer feel any need to, because Scott Alexander of SlateStarCodex has already hit this one out of the emulated park.

Second Coincidental But Not-Wholly-Unrelated Announcement: A reader named Nick Merrill recently came across this old quote of mine from Quantum Computing Since Democritus:

In a class I taught at Berkeley, I did an experiment where I wrote a simple little program that would let people type either “f” or “d” and would predict which key they were going to push next. It’s actually very easy to write a program that will make the right prediction about 70% of the time. Most people don’t really know how to type randomly. They’ll have too many alternations and so on. There will be all sorts of patterns, so you just have to build some sort of probabilistic model.

So Nick emailed me to ask whether I remembered how my program worked, and I explained it to him, and he implemented it as a web app, which he calls the “Aaronson Oracle.”
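
For readers who’d like to tinker, here’s a minimal Python sketch of the general idea: an order-k model that predicts the next key from what has historically followed the last few keys.  I’m reconstructing the flavor of the algorithm from memory, so the web app’s details may well differ.

from collections import defaultdict

class FDOracle:
    """Toy f/d predictor: condition on the last `order` keystrokes."""

    def __init__(self, order=5):
        self.order = order                        # how many previous keys to condition on
        self.counts = defaultdict(lambda: {'f': 0, 'd': 0})
        self.history = ''                         # everything the user has typed so far
        self.correct = 0
        self.total = 0

    def predict(self):
        """Guess the next key from what has followed the current context before."""
        c = self.counts[self.history[-self.order:]]
        return 'f' if c['f'] >= c['d'] else 'd'   # ties go to 'f', arbitrarily

    def observe(self, key):
        """Score the standing guess, then update the model with the key actually pressed."""
        assert key in ('f', 'd')
        self.total += 1
        self.correct += (self.predict() == key)
        self.counts[self.history[-self.order:]][key] += 1
        self.history += key

    def accuracy(self):
        return self.correct / max(self.total, 1)

# A typist who "randomizes" by alternating too often is easy prey:
oracle = FDOracle()
for key in 'fdfdffdfdfddfdfdfdfdffdfdfdfddfdfdfd':
    oracle.observe(key)
print(f"guessed right {oracle.accuracy():.0%} of the time")

Each keypress is first scored against the model’s standing guess and only then used to update the counts, which is why a human’s not-quite-random alternations get exploited within a few dozen keystrokes.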

So give it a try!  Are you ready to test your free will, your Penrosian non-computational powers, your brain’s sensitivity to amplified quantum fluctuations, against the Aaronson Oracle?

Update: By popular request, Nick has improved his program so that it shows your previous key presses and its guesses for them.  He also fixed a “security flaw”: James Lee noticed that you could use the least significant digit of the program’s percentage correct so far, as a source of pseudorandom numbers that the program couldn’t predict!  So now the program only displays its percent correct rounded to the nearest integer.

Update (June 15): Penrose’s collaborator Stuart Hameroff has responded in the comments; see here (my reply here) and here.

Me interviewed by John Horgan (the author of “The End of Science”)

Thursday, April 21st, 2016

You can read it here.

It’s long (~12,000 words).  Rather than listing what this interview covers, it would be easier to list what it doesn’t cover.  (My favorite soda flavors?)

If you read this blog, much of what I say there will be old hat, but some of it will be new.  I predict that you’ll enjoy the interview iff you enjoy the blog.  Comments welcome.

Grading Trudeau on quantum computing

Sunday, April 17th, 2016

Update (4/19): Inspired by Trudeau’s performance (which they clocked at 35 seconds), Maclean’s magazine asked seven quantum computing researchers—me, Krysta Svore, Aephraim Steinberg, Barry Sanders, Davide Venturelli, Martin Laforest, and Murray Thom—to also explain quantum computing in 35 seconds or fewer.  You can see all the results here (here’s the audio from my entry).

The emails started hitting me like … a hail of maple syrup from the icy north.  Had I seen the news?  Justin Trudeau, the dreamy young Prime Minister of Canada, visited the Perimeter Institute for Theoretical Physics in Waterloo, one of my favorite old haunts.  At a news conference at PI, as Trudeau stood in front of a math-filled blackboard, a reporter said to him: “I was going to ask you to explain quantum computing, but — when do you expect Canada’s ISIL mission to begin again, and are we not doing anything in the interim?”

Rather than answering immediately about ISIL, Trudeau took the opportunity to explain quantum computing:

“Okay, very simply, normal computers work, uh, by [laughter, applause] … no no no, don’t interrupt me.  When you walk out of here, you will know more … no, some of you will know far less about quantum computing, but most of you … normal computers work, either there’s power going through a wire, or not.  It’s 1, or a 0, they’re binary systems.  Uh, what quantum states allow for is much more complex information to be encoded into a single bit.  Regular computer bit is either a 1 or a 0, on or off.  A quantum state can be much more complex than that, because as we know [speeding up dramatically] things can be both particle and wave at the same times and the uncertainty around quantum states [laughter] allows us to encode more information into a much smaller computer.  So, that’s what exciting about quantum computing and that’s… [huge applause] don’t get me going on this or we’ll be here all day, trust me.”

What marks does Trudeau get for this?  On the one hand, the widespread praise for this reply surely says more about how low the usual standards for politicians are, and about Trudeau’s fine comic delivery, than about anything intrinsic to what he said.  Trudeau doesn’t really assert much here: basically, he just says that normal computers work using 1’s and 0’s, and that quantum computers are more complicated than that in some hard-to-explain way.  He gestures toward the uncertainty principle and wave/particle duality, but he doesn’t say anything about the aspects of QM most directly relevant to quantum computing—superposition or interference or the exponential size of Hilbert space—nor does he mention what quantum computers would or wouldn’t be used for.

On the other hand, I’d grade Trudeau’s explanation as substantially more accurate than what you’d get from a typical popular article.  For pay close attention to what the Prime Minister never says: he never says that a qubit would be “both 0 and 1 at the same time,” or any equivalent formulation.  (He does say that quantum states would let us “encode more information into a much smaller computer,” but while Holevo’s Theorem says that’s false for a common interpretation of “information,” it’s true for other reasonable interpretations.)  The humorous speeding up as he mentions particle/wave duality and the uncertainty principle clearly suggests that he knows it’s more subtle than just “0 and 1 at the same time,” and he also knows that he doesn’t really get it and that the journalists in the audience don’t either.  When I’m grading exams, I always give generous partial credit for honest admissions of ignorance.  B+.
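
(For reference, since I waved at it above: the precise content of Holevo’s Theorem is that if a classical random variable $X$ is encoded into quantum states $\rho_x$ with probabilities $p_x$, then the classical information extractable by any measurement obeys

$$ I(X{:}Y) \;\le\; S\Bigl(\sum_x p_x \rho_x\Bigr) \;-\; \sum_x p_x\, S(\rho_x) \;\le\; \log_2 d, $$

where $S$ is the von Neumann entropy and $d$ is the Hilbert space dimension.  For $n$ qubits, $\log_2 d = n$, so $n$ qubits can never be used to retrieve more than $n$ classical bits; that’s the sense of “information” for which the claim would be false.)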

Anyway, I’d be curious to know who at PI prepped Trudeau for this, and what they said.  Those with inside info, feel free to share in the comments (anonymously if you want!).

(One could also compare against Obama’s 2008 answer about bubblesort, which was just a mention of a keyword by comparison.)

Update: See also a Motherboard article where Romain Alléaume, Amr Helmy, Michele Mosca, and Aephraim Steinberg rate Trudeau’s answer, giving it 7/10, no score, 9/10, and 7/10 respectively.