Some time ago I had the privilege of interacting a bit with Sam Altman, president of the famed startup incubator Y Combinator (and a guy who’s thanked in pretty much everything Paul Graham writes). By way of our mutual friend, the renowned former quantum computing researcher Michael Nielsen, Sam got in touch with me to solicit suggestions for “outside-the-box” scientists and writers, for a new grant program that Y Combinator was starting. I found Sam eager to delve into the merits of any suggestion, however outlandish, and was delighted to be able to make a difference for a few talented people who needed support.

Sam has also been one of the Silicon Valley leaders who’s written most clearly and openly about the threat to America posed by Donald Trump and the need to stop him, and he’s donated tens of thousands of dollars to anti-Trump causes. Needless to say, I supported Sam on that as well.

Now Sam is under attack on social media, and there are even calls for him to resign as the president of Y Combinator. Like me two years ago, Sam has instantly become the corporeal embodiment of the “nerd privilege” that keeps the marginalized out of Silicon Valley.

Why? Because, despite his own emphatic anti-Trump views, Sam rejected demands to fire Peter Thiel (who has an advisory role at Y Combinator) because of Thiel’s support for Trump. Sam explained his reasoning at some length:

[A]s repugnant as Trump is to many of us, we are not going to fire someone over his or her support of a political candidate. As far as we know, that would be unprecedented for supporting a major party nominee, and a dangerous path to start down (of course, if Peter said some of the things Trump says himself, he would no longer be part of Y Combinator) … The way we got into a situation with Trump as a major party nominee in the first place was by not talking to people who are very different than we are … I don’t understand how 43% of the country supports Trump. But I’d like to find out, because we have to include everyone in our path forward. If our best ideas are to stop talking to or fire anyone who disagrees with us, we’ll be facing this whole situation again in 2020.

The usual criticism of nerds is that we might have narrow technical abilities, but we lack wisdom about human affairs. It’s ironic, then, that it appears to have fallen to Silicon Valley nerds to guard some of the most important human wisdom our sorry species ever came across—namely, the liberal ideals of the Enlightenment. Like Sam, I despise pretty much everything Trump stands for, and I’ve been far from silent about it: I’ve blogged, donated money, advocated vote swapping, endured anonymous comments like “kill yourself kike”—whatever seemed like it might help even infinitesimally to ensure the richly-deserved electoral thrashing that Trump mercifully seems to be headed for in a few weeks.

But I also, I confess, oppose the forces that apparently see Trump less as a global calamity to be averted, than as a golden opportunity to take down anything they don’t like that’s ever been spotted within a thousand-mile radius of Trump Tower. (Where does this Kevin Bacon game end, anyway? Do “six degrees of Trump” suffice to contaminate you?)

And not only do I not feel a shadow of a hint of a moral conflict here, but it seems to me that precisely the same liberal Enlightenment principles are behind both of these stances.

But I’d go yet further. It sort of flabbergasts me when social-justice activists don’t understand that, if we condemn not only Trump, not only his supporters, but even *vociferous Trump opponents who associate with Trump supporters (!)*, all we’ll do is to feed the narrative that got Trumpism as far as it has—namely, that of a smug, bubble-encased, virtue-signalling leftist elite subject to runaway political correctness spirals. Like, a hundred million Americans’ worldviews revolve around the fear of liberal persecution, and we’re going to change their minds by firing those who refuse to fire *them*? As a recent *Washington Post* story illustrates, the opposite approach is harder but can bear spectacular results.

Now, as for Peter Thiel: three years ago, he funded a small interdisciplinary workshop on the coast of France that I attended. With me there were a bunch of honest-to-goodness conservative Christians, a Freudian psychoanalyst, a novelist, a right-wing radio host, some scientists and Silicon Valley executives, and of course Thiel himself. Each, I found, offered tons to disagree about but also some morsels to learn.

Thiel’s worldview, focused on the technological and organizational greatness that (in his view) Western civilization used to have and has subsequently lost, was a bit too dark and pessimistic for me, and I’m a pretty dark and pessimistic person. Thiel gave a complicated, meandering lecture that involved comparing modern narratives about Silicon Valley entrepreneurs against myths of gods, heroes, and martyrs throughout history, such as Romulus and Remus (the legendary founders of Rome). The talk might have made more sense to Thiel than to his listeners.

At the same time, Thiel’s range of knowledge and curiosity was pretty awesome. He avidly followed all the talks (including mine, on P vs. NP and quantum complexity theory) and asked pertinent questions. When the conversation turned to D-Wave, and Thiel’s own decision not to invest in it, he laid out the conclusions he’d come to from an extremely quick look at the question, then quizzed me as to whether he’d gotten anything wrong. He hadn’t.

From that conversation among others, I formed the impression that Thiel’s success as an investor is, at least in part, down neither to luck nor to connections, but to a module in his brain that most people lack, which makes blazingly fast and accurate judgments about tech startups. No wonder Y Combinator would want to keep him as an adviser.

But, OK, I’m so used to the same person being spectacularly right on some things and spectacularly wrong on others, that it no longer causes even slight cognitive dissonance. You just take the issues one by one.

I was happy, on balance, when it came out that Thiel had financed the lawsuit that brought down Gawker Media. Gawker really *had* used its power to bully the innocent, and it had broken the law to do it. And if it’s an unaccountable, anti-egalitarian, billionaire Godzilla against a vicious, privacy-violating, nerd-baiting King Kong—well then, I guess I’m with Godzilla.

More recently, I was appalled when Thiel spoke at the Republican convention, pandering to the crowd with Fox-News-style attack lines that were unworthy of a mind of his caliber. I lost a lot of respect for Thiel that day. But that’s the thing: unlike with literally every other speaker at the GOP convention, my respect for Thiel had started from a point that *made a decrease possible.*

I reject huge parts of Thiel’s worldview. I also reject any worldview that would threaten me with ostracism for talking to Thiel, attending a workshop he sponsors, or saying anything good about him. This is not actually a difficult balance.

Today, when it sometimes seems like much of the world has united in salivating for a cataclysmic showdown between whites and non-whites, Christians and Muslims, “dudebros” and feminists, etc., and that the salivators differ mostly just in who they want to see victorious in the coming battle and who humiliated, it can feel lonely to stick up for naïve, outdated values like the free exchange of ideas, friendly disagreement, the presumption of innocence, and the primacy of the individual over the tribe. But those are the values that took us all the way from a bronze spear through the enemy’s heart to a snarky rebuttal on the arXiv, and they’ll continue to build anything worth building.

And now to watch the third debate (I’ll check the comments afterward)…

**Update (Oct. 20):** See also this post from a blog called TheMoneyIllusion. My favorite excerpt:

So let’s see. Not only should Trump be shunned for his appalling political views, an otherwise highly respected Silicon Valley entrepreneur who just happens to support Trump (along with 80 million other Americans) should also be shunned. And a person who despises Trump and works against him but who defends Thiel’s right to his own political views should also resign. Does that mean I should be shunned too? After all, I’m a guy who hates Trump, writing a post that defends a guy who hates Trump, who wrote a post defending a guy’s freedom to support Trump, who in turn supports Trump. And suppose my mother sticks up for me? Should she also be shunned?

It’s almost enough to make me vote . . . no, just kidding.

Question … Which people on the left are beyond the pale? Suppose Thiel had supported Hugo Chavez? How about Castro? Mao? Pol Pot? Perhaps the degrees of separation could be calibrated to the awfulness of the left-winger:

Chavez: One degree of separation. (Corbyn, Sean Penn, etc.)

Castro: Two degrees of separation is still toxic.

Lenin: Three degrees of separation.

Mao: Four degrees of separation.

Pol Pot: Five degrees of separation.

*Thanks so much to Sanjeev Arora, Boaz Barak, Ran Raz, Peter Sarnak, and Amir Shpilka for organizing the conference and for inviting me to speak; to all the other participants and speakers for a great conference; and of course to Avi himself for being Avi. –SA*

I apologize that I wasn’t able to prepare slides for today’s talk. But the good news is that, ever since I moved to Texas two months ago, I now carry concealed chalk everywhere I go. [Pull chalk out of pocket]

My history with Avi goes back literally half my life. I spent a semester with him at Hebrew University, and then a year with him as a postdoc here at IAS. Avi has played a bigger role in my career than just about anyone—he helped me professionally, he helped me intellectually, and once I dated and then married an Israeli theoretical computer scientist (who was also a postdoc in Avi’s group), Avi even helped me learn Hebrew. Just today, Avi taught me the Hebrew word for the permanent of a matrix. It turns out that it’s [throaty noises] *pehhrmahnent*.

But it all started with a talk that Avi gave in Princeton in 2000, which I attended as a prospective graduate student. That talk was about the following function of an n×n matrix A∈R^{n×n}, the *permanent*:

$$ \text{Per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^n a_{i,\sigma(i)}. $$

Avi contrasted that function with a superficially similar function, the *determinant*:

$$ \text{Det}(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma(i)}. $$

In this talk, I want to share a few of the amazing things Avi said about these two functions, and how the things he said then reverberated through my entire career.

Firstly, like we all learn in kindergarten or whatever, the determinant is computable in polynomial time, for example by using Gaussian elimination. By contrast, Valiant proved in 1979 that computing the permanent is #P-complete—which means, at least as hard as any combinatorial counting problem, a class believed to be even harder than NP-complete.

So, despite differing from each other only by some innocent-looking -1 factors, which the determinant has but the permanent lacks, these two functions effectively engage different regions of mathematics. The determinant is linear-algebraic and geometric; it’s the product of the eigenvalues and the volume of the parallelepiped defined by the row vectors. But the permanent is much more stubbornly combinatorial. It’s not quite as pervasive in mathematics as the determinant is, though it does show up. As an example, if you have a bipartite graph G with n vertices on each side, then the permanent of G’s n×n biadjacency matrix counts the number of perfect matchings in G.
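Both defining sums can be transcribed directly into code (useful only for tiny matrices, since each takes n! time). A minimal sketch:

```python
from itertools import permutations
from math import prod

def sign(perm):
    """Sign of a permutation: -1 raised to the number of inversions."""
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm))
                if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def per(A):
    """Permanent: sum over all n! permutations, with no sign factors."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def det(A):
    """Determinant: the same sum, but each term weighted by sgn(sigma)."""
    n = len(A)
    return sum(sign(s) * prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))
```

For the 0/1 biadjacency matrix of a bipartite graph, `per` counts perfect matchings: for example, `per([[1,1],[0,1]])` gives 1, matching the single perfect matching of that graph.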

When n=2, computing the permanent doesn’t look too different from computing the determinant: indeed, we have

$$ \text{Per}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \text{Det}\begin{pmatrix} a & -b \\ c & d \end{pmatrix}. $$

But as n gets larger, the fact that the permanent is #P-complete means that it *must* get exponentially harder to compute than the determinant, unless basic complexity classes collapse. And indeed, to this day, the fastest known algorithm to compute an n×n permanent, *Ryser’s algorithm*, takes O(n2^{n}) time, which is only modestly better than the brute-force algorithm that just sums all n! terms.
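Ryser’s formula is a short inclusion-exclusion sum over column subsets. The sketch below is the naive version, which spends O(n²) work per subset rather than using the Gray-code trick that yields the O(n·2^n) bound:

```python
from itertools import combinations
from math import prod

def ryser_permanent(A):
    """Ryser's inclusion-exclusion formula:
    Per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^{|S|} * prod_i (sum of row i restricted to S)."""
    n = len(A)
    total = 0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            term = prod(sum(A[i][j] for j in cols) for i in range(n))
            total += (-1) ** size * term
    return (-1) ** n * total
```

Even this unoptimized version beats the n!-term brute-force sum for modest n, which is roughly the state of the art: exponential, but with a much smaller exponential.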

Yet interestingly, not *everything* about the permanent is hard. So for example, if A is nonnegative, then in 2001, Jerrum, Sinclair, and Vigoda famously gave a polynomial-time randomized algorithm to approximate Per(A) to within a 1+ε multiplicative factor, for ε>0 as small as you like. As an even simpler example, if A is nonnegative and you just want to know whether its permanent is zero or nonzero, that’s equivalent to deciding whether a bipartite graph has at least one perfect matching. And we all know that that can be done in polynomial time.

OK, but the usual algorithm from the textbooks that puts the matching problem in the class P is already a slightly nontrivial one—maybe first grade rather than kindergarten! It involves repeatedly using depth-first search to construct augmenting paths, then modifying the graph, etc. etc.
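In code, that textbook approach looks roughly like this (a sketch of Kuhn’s augmenting-path algorithm; real implementations add heuristics and avoid deep recursion on large graphs):

```python
def max_bipartite_matching(adj):
    """Kuhn's augmenting-path algorithm. adj[u] lists the right-side
    vertices adjacent to left-side vertex u. Returns the size of a
    maximum matching."""
    match_right = {}  # right vertex -> the left vertex matched to it

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if v not in match_right or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

def has_perfect_matching(adj, n):
    """Equivalently: is the permanent of the biadjacency matrix nonzero?"""
    return max_bipartite_matching(adj) == n
```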

Sixteen years ago in Princeton, the first thing Avi said that blew my mind is that there’s a vastly simpler polynomial-time algorithm to decide whether a bipartite graph has a perfect matching—or equivalently, to decide whether a nonnegative matrix has a zero or nonzero permanent. This algorithm is not quite as efficient as the textbook one, but it makes up for it by being more magical.

So here’s what you do: you start with the 0/1 adjacency matrix of your graph. I’ll do a 2×2 example, since that’s all I’ll be able to compute on the fly:

$$ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}. $$

Then you normalize each row so it sums to 1. In the above example, this would give

$$ \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ 0 & 1 \end{pmatrix}. $$

Next you normalize each *column* so it sums to 1:

$$ \begin{pmatrix} 1 & \frac{1}{3} \\ 0 & \frac{2}{3} \end{pmatrix}. $$

OK, but now the problem is that the rows are no longer normalized, so you normalize them again:

$$ \begin{pmatrix} \frac{3}{4} & \frac{1}{4} \\ 0 & 1 \end{pmatrix}. $$

Then you normalize the columns again, and so on. You repeat something like n^{3}log(n) times. If, after that time, all the row sums and column sums have become within ±1/n of 1, then you conclude that the permanent was nonzero and the graph had a perfect matching. Otherwise, the permanent was zero and the graph had no perfect matching.
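Here is the whole procedure as code. It is a sketch of exactly what the text describes; the iteration count and the ±1/n acceptance threshold are taken from the talk’s rough statement, not tuned:

```python
import math

def permanent_nonzero_by_scaling(A, iters=None):
    """Alternately normalize rows and columns of a nonnegative matrix.
    If all row and column sums end up within 1/n of 1, conclude the
    permanent is nonzero (the graph has a perfect matching)."""
    n = len(A)
    M = [[float(x) for x in row] for row in A]
    if iters is None:
        # roughly the n^3 log n iterations mentioned in the text
        iters = max(1, round(n ** 3 * math.log(n + 1)))
    for _ in range(iters):
        for i in range(n):                        # normalize rows
            s = sum(M[i])
            if s == 0:
                return False                      # an all-zero row: permanent is 0
            M[i] = [x / s for x in M[i]]
        for j in range(n):                        # normalize columns
            s = sum(M[i][j] for i in range(n))
            if s == 0:
                return False                      # an all-zero column: permanent is 0
            for i in range(n):
                M[i][j] /= s
    ok_rows = all(abs(sum(M[i]) - 1) <= 1 / n for i in range(n))
    ok_cols = all(abs(sum(M[i][j] for i in range(n)) - 1) <= 1 / n
                  for j in range(n))
    return ok_rows and ok_cols
```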

What gives? Well, it’s a nice exercise to prove why this works. I’ll just give you a sketch: first, when you multiply any row or column of a matrix by a scalar, you multiply the permanent by that same scalar. By using that fact, together with the arithmetic-geometric mean inequality, it’s possible to prove that, in every iteration but the first, when you rebalance all the rows or all the columns to sum to 1, you can’t be decreasing the permanent. The permanent is monotonically non-decreasing.

Second, *if* the permanent is nonzero, then after the first iteration it’s at least 1/n^{n}, simply because we started with a matrix of 0’s and 1’s.

Third, the permanent is always at most the product of the row sums or the product of the column sums, which after the first iteration is 1.

Fourth, at any iteration where there’s some row sum or column sum that’s far from 1, rescaling must not only increase the permanent, but increase it by an appreciable amount—like, a 1+1/n^{2} factor or so.

Putting these four observations together, we find that *if* the permanent is nonzero, then our scaling procedure must terminate after a polynomial number of steps, with every row sum and every column sum close to 1—since otherwise, the permanent would just keep on increasing past its upper bound of 1.

But a converse statement is also true. Suppose the matrix can be scaled so that every row sum and every column sum gets within ±1/n of 1. Then the rescaled entries define a *flow* through the bipartite graph, with roughly the same amount of flow through each of the 2n vertices. But if such a flow exists, then Hall’s Theorem tells you that there must be a perfect matching (and hence the permanent must be nonzero)—since if a matching *didn’t* exist, then there would necessarily be a bottleneck preventing the flow.

Together with Nati Linial and Alex Samorodnitsky, Avi generalized this to show that scaling the rows and columns gives you a polynomial-time algorithm to *approximate* the permanent of a nonnegative matrix. This basically follows from the so-called Egorychev-Falikman Theorem, which says that the permanent of a doubly stochastic matrix is at least n!/n^{n}. The approximation ratio that you get this way, ~e^{n}, isn’t nearly as good as Jerrum-Sinclair-Vigoda’s, but the advantage is that the algorithm is *deterministic* (and also ridiculously simple).
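To turn scaling into an estimate, you track the scaling factors: if the scaled matrix M is doubly stochastic, then Per(A) equals Per(M) divided by the product of all the factors, and n!/n^n ≤ Per(M) ≤ 1 sandwiches the answer. The sketch below (my own rendering of that idea, not Linial-Samorodnitsky-Wigderson’s actual algorithm; it assumes a matrix with nonzero permanent, so no row or column sum vanishes) returns the geometric midpoint of the two bounds:

```python
import math

def approx_permanent(A, iters=200):
    """Sinkhorn-scale A toward a doubly stochastic matrix, tracking the
    log of the product of all scaling factors. Since Per(M) = Per(A) *
    (product of factors) and n!/n^n <= Per(M) <= 1, this pins Per(A)
    down to within a ~e^n multiplicative factor."""
    n = len(A)
    M = [[float(x) for x in row] for row in A]
    log_scale = 0.0                    # log of the product of scaling factors
    for _ in range(iters):
        for i in range(n):
            s = sum(M[i])
            M[i] = [x / s for x in M[i]]
            log_scale += math.log(1 / s)        # row i was multiplied by 1/s
        for j in range(n):
            s = sum(M[i][j] for i in range(n))
            for i in range(n):
                M[i][j] /= s
            log_scale += math.log(1 / s)        # column j was multiplied by 1/s
    # log(n!/n^n) <= log Per(M) <= 0; return the geometric midpoint
    lower = math.lgamma(n + 1) - n * math.log(n)
    return math.exp(-log_scale + lower / 2)
```

On the identity matrix this returns 1/√2 for n=2: off by exactly the guaranteed √(n^n/n!) factor, which is the price of determinism and simplicity.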

For myself, though, I just filed away this idea of matrix scaling for whenever I might need it. It didn’t take long. A year after Avi’s lecture, when I was a beginning grad student at Berkeley, I was obsessing about the foundations of quantum mechanics. Specifically, I was obsessing about the fact that, when you measure a quantum state, the rules of quantum mechanics tell you how to calculate the probability that you’ll see a particular outcome. But the rules are silent about so-called *multiple-time* or *transition* probabilities. In other words: suppose we adopt Everett’s Many-Worlds view, according to which quantum mechanics needs to be applied consistently to every system, regardless of scale, so in particular, the state of the entire universe (including us) is a quantum superposition state. We perceive just one branch, but there are also these other branches where we made different choices or where different things happened to us, etc.

OK, fine: for me, that’s not the troubling part! The troubling part is that quantum mechanics rejects as meaningless questions like the following: given that you’re in *this* branch of the superposition at time t_{1}, what’s the probability that you’ll be in *that* branch at time t_{2}, after some unitary transformation is applied? Orthodox quantum mechanics would say: well, either someone measured you at time t_{1}, in which case their act of measuring collapsed the superposition and created a whole new situation. Or else no one measured at t_{1}—but in that case, your state at time t_{1} *was* the superposition state, full stop. It’s sheer metaphysics to imagine a “real you” that jumps around from one branch of the superposition to another, having a sequence of definite experiences.

Granted, in practice, branches of the universe’s superposition that split from each other tend never to rejoin, for the same thermodynamic reasons why eggs tend never to unscramble themselves. And as long as the history of the Everettian multiverse has the structure of a tree, we *can* sensibly define transition probabilities. But if, with some technology of the remote future, we were able to do quantum interference experiments on human brains (or other conscious entities), the rules of quantum mechanics would no longer predict what those beings should see—not even probabilistically.

I was interested in the question: suppose we just wanted to *postulate* transition probabilities, with the transitions taking place in some fixed orthogonal basis. What would be a mathematically reasonable way to do that? And it occurred to me that one thing you could do is the following. Suppose for simplicity that you have a pure quantum state, which is just a unit vector of n complex numbers called *amplitudes*:

$$ \begin{pmatrix} \alpha_{1} \\ \vdots \\ \alpha_{n} \end{pmatrix} $$

Then the first rule of quantum mechanics says that you can apply any *unitary transformation* U (that is, any norm-preserving linear transformation) to map this state to a new one:

$$ \begin{pmatrix} \beta_{1} \\ \vdots \\ \beta_{n} \end{pmatrix} = \begin{pmatrix} u_{11} & \cdots & u_{1n} \\ \vdots & \ddots & \vdots \\ u_{n1} & \cdots & u_{nn} \end{pmatrix} \begin{pmatrix} \alpha_{1} \\ \vdots \\ \alpha_{n} \end{pmatrix}. $$

The second rule of quantum mechanics, the famous *Born Rule*, says that if you measure in the standard basis before applying U, then the probability that you’ll find yourself in state i equals |α_{i}|^{2}. Likewise, if you measure in the standard basis *after* applying U, the probability that you’ll find yourself in state j equals |β_{j}|^{2}.

OK, but what’s the probability that you’re in state i at the initial time, *and then* state j at the final time? These joint probabilities, call them p_{ij}, had better add up to |α_{i}|^{2} and |β_{j}|^{2}, if we sum the rows and columns respectively. And ideally, they should be “derived” in some way from the unitary U—so that for example, if u_{ij}=0 then p_{ij}=0 as well.

So here’s something you could do: start by replacing each u_{ij} by its absolute value, to get a nonnegative matrix. Then, normalize the i^{th} row so that it sums to |α_{i}|^{2}, for each i. Then normalize the j^{th} column so that it sums to |β_{j}|^{2}, for each j. Then normalize the rows, then the columns, and keep iterating until hopefully you end up with all the rows *and* columns having the right sums.

So the first question I faced was, does this process converge? And I remembered what Avi taught me about the permanent. In this case, because of the nonuniform row and column scalings, the permanent no longer works as a progress measure, but there’s something else that *does* work. Namely, as a first step, we can use the Max-Flow/Min-Cut Theorem to show that there exists a nonnegative matrix F=(f_{ij}) such that f_{ij}=0 whenever u_{ij}=0, and also

$$ \sum_j f_{ij} = |\alpha_i|^2 \quad \forall i, \qquad \sum_i f_{ij} = |\beta_j|^2 \quad \forall j. $$

Then, letting M=(m_{ij}) be our current rescaled matrix (so that initially m_{ij}:=|u_{ij}|), we use

$$ \prod_{i,j : u_{ij}\ne 0} m_{ij}^{f_{ij}} $$

as our progress measure. By using the nonnegativity of the Kullback-Leibler divergence, one can prove that this quantity never decreases. So then, just like with 0/1 matrices and the permanent, we get eventual convergence, and indeed convergence after a number of iterations that’s polynomial in n.
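The process itself is the same Sinkhorn loop as before, just with nonuniform targets. A sketch (the function name is mine; it assumes, for simplicity, that no intermediate row or column sum vanishes, as holds e.g. when all the u_{ij} are nonzero):

```python
def schrodinger_scaling(U, alpha2, beta2, iters=500):
    """Scale the matrix of |u_ij| values so that row i sums to |alpha_i|^2
    and column j sums to |beta_j|^2, by alternating row and column
    rescaling. Returns the candidate joint probabilities p_ij."""
    n = len(U)
    P = [[abs(U[i][j]) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        for i in range(n):                        # rows -> |alpha_i|^2
            s = sum(P[i])
            P[i] = [x * alpha2[i] / s for x in P[i]]
        for j in range(n):                        # columns -> |beta_j|^2
            s = sum(P[i][j] for i in range(n))
            for i in range(n):
                P[i][j] *= beta2[j] / s
    return P
```

Note that p_ij = 0 automatically wherever u_ij = 0, since rescaling never creates support that wasn’t there.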

I was pretty stoked about this until I went to the library, and discovered that Erwin Schrödinger had proposed the same matrix scaling process in 1931! And Masao Nagasawa and others then rigorously analyzed it. OK, but their motivations were somewhat different, and for some reason they never talked about finite-dimensional matrices, only infinite-dimensional ones.

I can’t resist telling you my favorite open problem about this matrix scaling process: namely, is it stable under small perturbations? In other words, if I change one of the α_{i}‘s or u_{ij}‘s by some small ε, then do the final p_{ij}‘s also change by at most some small δ? To clarify, several people have shown me how to prove that the mapping to the p_{ij}‘s is *continuous*. But for computer science applications, one needs something stronger: namely that when the matrix M, and the row and column scalings, actually arise from a unitary matrix in the way above, we get strong uniform continuity, with a 1/n^{O(1)} change to the inputs producing only a 1/n^{O(1)} change to the outputs (and hopefully even better than that).

The more general idea that I was groping toward or reinventing here is called a *hidden-variable theory*, of which the most famous example is Bohmian mechanics. Again, though, Bohmian mechanics has the defect that it’s only formulated for some exotic state space that the physicists care about for some reason—a space involving pointlike objects called “particles” that move around in 3 Euclidean dimensions (why 3? why not 17?).

Anyway, this whole thing led me to wonder: under the Schrödinger scaling process, or something like it, what’s the computational complexity of *sampling* an entire history of the hidden variable through a quantum computation? (“If, at the moment of your death, your whole life history flashes before you in an instant, what can you *then* efficiently compute?”)

Clearly the complexity is at least BQP (i.e., quantum polynomial time), because even sampling where the hidden variable is at a *single* time is equivalent to sampling the output distribution of a quantum computer. But could the complexity be even more than BQP, because of the *correlations* between the hidden variable values at different times? I noticed that, indeed, sampling a hidden variable history would let you do some crazy-seeming things, like solve the Graph Isomorphism problem in polynomial time (OK, fine, that seemed more impressive at the time than it does after Babai’s breakthrough), or find collisions in arbitrary cryptographic hash functions, or more generally, solve any problem in the complexity class SZK (Statistical Zero Knowledge).

But you might ask: what evidence do we have that any of these problems are hard even for garden-variety quantum computers? As many of you know, it’s widely conjectured today that NP⊄BQP—i.e., that quantum computers can’t solve NP-complete problems in polynomial time. And in the “black box” setting, where all you know how to do is query candidate solutions to your NP-complete problem to check whether they’re valid, it’s been *proven* that quantum computers don’t give you an exponential speedup: the best they can give is the square-root speedup of Grover’s algorithm.

But for these SZK problems, like finding collisions in hash functions, who the hell knows? So, this is the line of thought that led me to probably the most important thing I did in grad school, the so-called quantum lower bound for collision-finding. That result says that, if (again) your hash function is only accessible as a black box, then a quantum computer can provide at most a polynomial speedup over a classical computer for finding collisions in it (in this case, it turns out, at most a two-thirds power speedup). There are several reasons you might care about that, such as showing that one of the basic building blocks of modern cryptography could still be secure in a world with quantum computers, or proving an oracle separation between SZK and BQP. But my original motivation was just to understand how transition probabilities would change quantum computation.

The permanent has also shown up in a much more direct way in my work on quantum computation. If we go back to Avi’s lecture from 2000, a *second* thing he said that blew my mind was that apparently, or so he had heard, even the fundamental particles of the universe know something about the determinant and the permanent. In particular, he said, fermions—the matter particles, like the quarks and electrons in this stage—have transition amplitudes that are determinants of matrices. Meanwhile, bosons—the force-carrying particles, like the photons coming from the ceiling that let you see this talk—have transition amplitudes that are permanents of matrices.

Or as Steven Weinberg, one of the great physicists on earth, memorably put it in the first edition of his recent quantum mechanics textbook: “in the case of bosons, it is also a determinant, except without minus signs.” I’ve had the pleasure of getting to know Weinberg at Austin, so recently I asked him about that line. He told me that *of course* he knew that the determinant without minus signs is called a permanent, but he thought no one else would know! As far as he knew, the permanent was just some esoteric function used by a few quantum field theorists who needed to calculate boson amplitudes.

Briefly, the reason why the permanent and determinant turn up here is the following: whenever you have n particles that are identical, to calculate the amplitude for them to do something, you need to sum over all n! possible permutations of the particles. Furthermore, each contribution to the sum is a product of n complex numbers, one u_{ij} for each particle that hops from i to j. But there’s a difference: when you swap two identical bosons, nothing happens, and that’s why bosons give rise to the permanent (of an n×n complex matrix, if there are n bosons). By contrast, when you swap two identical fermions, the amplitude for that state of the universe gets multiplied by -1, and that’s why fermions give rise to the determinant.
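That recipe, stripped of normalization constants (which I’m omitting here; real boson amplitudes also carry extra factors when several particles occupy the same mode), can be written directly:

```python
from itertools import permutations
from math import prod

def transition_amplitude(U, ins, outs, particle):
    """Amplitude for identical, non-interacting particles starting in
    modes `ins` and ending in modes `outs`, given the single-particle
    transition matrix U. Bosons sum over permutations with weight +1
    (a permanent); fermions pick up -1 per swap (a determinant)."""
    n = len(ins)
    total = 0
    for sigma in permutations(range(n)):
        weight = 1
        if particle == "fermion":
            inv = sum(1 for a in range(n) for b in range(a + 1, n)
                        if sigma[a] > sigma[b])
            weight = (-1) ** inv
        total += weight * prod(U[ins[k]][outs[sigma[k]]] for k in range(n))
    return total
```

As a sanity check, for a 50/50 beamsplitter the boson amplitude for one photon to exit each port vanishes (the two permutation terms cancel), which is the Hong-Ou-Mandel effect.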

Anyway, Avi ended his talk with a quip about how unfair it seemed to the bosons that they should have to work so much harder than the fermions just to calculate where they should be!

And then that one joke of Avi—that way of looking at things—rattled around in my head for a decade, like a song I couldn’t get rid of. It raised the question: wait a minute, bosons—particles that occur in Nature—are governed by a #P-complete function? Does that mean we could actually use bosons to solve #P-complete problems in polynomial time? That seems ridiculous, like the kind of nonsense I’m fighting every few weeks on my blog! As I said before, most of us don’t even expect quantum computers to be able to solve NP-complete problems in polynomial time, let alone #P-complete ones.

As it happens, Troyansky and Tishby had already taken up that puzzle in 1996. (Indeed Avi, being the social butterfly and hub node of our field that he is, had learned about the role of permanents and determinants in quantum mechanics from them.) What Troyansky and Tishby said was, it’s true that if you have a system of n identical, non-interacting bosons, their transition amplitudes are given by permanents of n×n matrices. OK, but amplitudes in quantum mechanics are not directly observable. They’re just what you use to calculate the probability that you’ll see this or that measurement outcome. But if you try to encode a hard instance of a #P-complete problem into a bosonic system, the relevant amplitudes will in general be exponentially small. And that means that, if you want a decent estimate of the permanent, you’ll need to repeat the experiment an exponential number of times. So OK, they said, nice try, but this doesn’t give you a computational advantage after all in calculating the permanent compared to classical brute force.

In our 2011 work on BosonSampling, my student Alex Arkhipov and I reopened the question. We said, not so fast. It’s true that bosons don’t seem to help you in estimating the permanent of a *specific* matrix of your choice. But what if your goal was just to sample a *random* n×n matrix A∈C^{n×n}, in a way that’s somehow biased toward matrices with larger permanents? Now, *why* would that be your goal? I have no idea! But this sampling is something that a bosonic system would easily let you do.

So, what Arkhipov and I proved was that this gives rise to a class of probability distributions that can be sampled in quantum polynomial time (indeed, by a very rudimentary type of quantum computer), but that can’t be sampled in classical polynomial time unless the polynomial hierarchy collapses to the third level. And even though you’re not solving a #P-complete problem, the #P-completeness of the permanent still plays a crucial role in explaining why the sampling problem is hard. (Basically, one proves that the probabilities are #P-hard even to approximate, but that *if* there were a fast classical sampling algorithm, then the probabilities could be approximated in the class BPP^{NP}. So if a fast classical sampling algorithm existed, then P^{#P} would equal BPP^{NP}, which would collapse the polynomial hierarchy by Toda’s Theorem.)

When we started on this, Arkhipov and I thought about it as just pure complexity theory—conceptually clarifying what role the #P-completeness of the permanent plays in physics. But then at some point it occurred to us: bosons (such as photons) *actually exist*, and experimentalists in quantum optics like to play with them, so maybe they could demonstrate some of this stuff in the lab. And as it turned out, the quantum optics people were looking for something to do at the time, and they ate it up.

Over the past five years, a trend has arisen in experimental physics that goes by the name “Quantum Supremacy,” although some people are now backing away from the name because of Trump. The idea is: without yet having a universal quantum computer, can we use the hardware that we’re able to build today to demonstrate the reality of a quantum-computational speedup as clearly as possible? Not necessarily for a useful problem, but just for *some* problem? Of course, no experiment can prove that something is scaling polynomially rather than exponentially, since that’s an asymptotic statement. But an experiment could certainly raise the stakes for the people who deny such a statement—for example, by solving something a trillion times faster than we know how to solve it otherwise, using methods for which we don’t know a reason for them *not* to scale.

I like to say that for me, the #1 application of quantum computing, more than breaking RSA or even simulating physics and chemistry, is simply disproving the people who say that quantum computing is impossible! So, quantum supremacy targets *that* application.

Experimental BosonSampling has become a major part of the race to demonstrate quantum supremacy. By now, at least a half-dozen groups around the world have reported small-scale implementations—the record, so far, being an experiment at Bristol that used 6 photons, and experimentally confirmed that, yes, their transition amplitudes are given by permanents of 6×6 complex matrices. The challenge now is to build single-photon sources that are good enough that you could scale up to (let’s say) 30 photons, which is where you’d really start seeing a quantum advantage over the best known classical algorithms. And again, this whole quest really started with Avi’s joke.

A year after my and Arkhipov’s work, I noticed that one also can run the connection between quantum optics and the permanent in the “reverse” direction. In other words: with BosonSampling, we used the famous theorem of Valiant, that the permanent is #P-complete, to help us argue that bosons can solve hard sampling problems. But if we know by some other means that quantum optics lets us encode #P-complete problems, then we can use that to give an independent, “quantum” proof that the permanent is #P-complete in the first place! As it happens, there *is* another way to see why quantum optics lets us encode #P-complete problems. Namely, we can use celebrated work by Knill, Laflamme, and Milburn (KLM) from 2001, which showed how to perform universal quantum computation using quantum optics with the one additional resource of “feed-forward measurements.” With minor modifications, the construction by KLM also lets us encode a #P-complete problem into a bosonic amplitude, which we know is a permanent—thereby proving that the permanent is #P-complete, in what I personally regard as a much more intuitive way than Valiant’s original approach based on cycle covers. This illustrates a theme that we’ve seen over and over in the last 13 years or so, which is the use of quantum methods and arguments to gain insight even about classical computation.

Admittedly, I wasn’t proving anything here in classical complexity theory that wasn’t already known, just giving a different proof for an old result! Extremely recently, however, my students Daniel Grier and Luke Schaeffer have extended my argument based on quantum optics, to show that computing the permanent of a unitary or orthogonal matrix is #P-complete. (Indeed, even over finite fields of characteristic k, computing the permanent of an orthogonal matrix is a Mod_{k}P-complete problem, as long as k is not 2 or 3—which turns out to be the tight answer.) This is not a result that we previously knew by *any* means, whether quantum or classical.

I can’t resist telling you the biggest theoretical open problem that arose from my and Arkhipov’s work. We would like to say: even if you had a polynomial-time algorithm that sampled a probability distribution that was merely *close*, in variation distance, to the BosonSampling distribution, that would already imply a collapse of the polynomial hierarchy. But we’re only able to prove that assuming a certain problem is #P-complete, which no one has been able to prove #P-complete. That problem is the following:

Given an n×n matrix A, each of whose entries is an i.i.d. complex Gaussian with mean 0 and variance 1 (that is, drawn from N(0,1)_{C}), estimate |Per(A)|^{2}, to within additive error ±ε·n!, with probability at least 1-δ over the choice of A. Do this in time polynomial in n, 1/ε, and 1/δ.

Note that, if you care about exactly computing the permanent of a Gaussian random matrix, *or* about approximating the permanent of an arbitrary matrix, we know how to prove both of those problems #P-complete. The difficulty “only” arises when we combine approximation and average-case in the same problem.

At the moment, we don’t even know something more basic, which is: what’s the *distribution* over |Per(A)|^{2}, when A is an n×n matrix of i.i.d. N(0,1)_{C} Gaussians? Based on numerical evidence, we conjecture that the distribution converges to lognormal as n gets large. By using the interpretation of the determinant as the volume of a parallelepiped, we can prove that the distribution over |Det(A)|^{2} converges to lognormal. And the distribution over |Per(A)|^{2} looks almost the same when you plot it. But not surprisingly, the permanent is harder to analyze.
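Here's the kind of quick numerical experiment behind that sort of conjecture: sample small Gaussian matrices, compute |Per(A)|² by brute force, and look at the distribution of its logarithm. (Everything below is my own illustrative sketch; only tiny n are feasible this way.)

```python
import numpy as np
from itertools import permutations

def permanent(A):
    """Brute-force permanent: sum over all n! permutations."""
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(0)
n, trials = 4, 1000
# N(0,1)_C: real and imaginary parts each N(0, 1/2), so E[|entry|^2] = 1
samples = np.array([
    np.abs(permanent((rng.standard_normal((n, n))
                      + 1j * rng.standard_normal((n, n))) / np.sqrt(2))) ** 2
    for _ in range(trials)])
logs = np.log(samples)
# If |Per(A)|^2 is approximately lognormal, these logs should look Gaussian:
print(f"log|Per|^2: mean {logs.mean():.2f}, std {logs.std():.2f}")
```

One thing that *is* easy to compute exactly is the mean: E[|Per(A)|²] = n! for such matrices, since the n! permutation terms are orthonormal, which meshes with the additive error ±ε·n! in the problem statement above.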

This brings me to my final vignette. Why would anyone even suspect that approximating the permanent of a Gaussian *random* matrix would be a #P-hard problem? Well, because if you look at the permanent of an n×n matrix over a large enough *finite* field, say F_{p}, that function famously has the property of *random self-reducibility*. This means: the ability to calculate such a permanent in polynomial time, on 90% of all matrices in F_{p}^{n×n}, or even for that matter on only 1% of them, implies the ability to calculate it in polynomial time on *every* such matrix.

The reason for this is simply that the permanent is a low-degree polynomial, and low-degree polynomials have extremely useful error-correcting properties. In particular, if you can compute such a polynomial on any large fraction of points, then you can do noisy polynomial interpolation (e.g., the Berlekamp-Welch algorithm, or list decoding), in order to get the value of the polynomial on an *arbitrary* point.
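To make the noiseless version of that concrete: Lipton's reduction evaluates Per(A) at an *arbitrary* A using only queries at random matrices. A toy sketch over a small prime field (function names are mine; the brute-force permanent stands in for the hypothetical average-case algorithm, here assumed correct on all of its queries, so no Berlekamp-Welch decoding is needed):

```python
import math
import random
from itertools import permutations

P = 10007  # a prime; we work over the finite field F_P

def perm_mod(A, p=P):
    """Brute-force permanent mod p, standing in for an algorithm that's
    only guaranteed to work on most random matrices."""
    n = len(A)
    return sum(math.prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n))) % p

def perm_anywhere(A, oracle, p=P):
    """Lipton's trick: f(t) = Per(A + t*B) is a degree-n polynomial in t,
    and for uniformly random B, each query A + t*B with t != 0 is itself a
    uniformly random matrix. So query the oracle at n+1 nonzero points and
    Lagrange-interpolate to recover f(0) = Per(A)."""
    n = len(A)
    B = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
    ts = list(range(1, n + 2))
    ys = [oracle([[(A[i][j] + t * B[i][j]) % p for j in range(n)]
                  for i in range(n)]) for t in ts]
    val = 0
    for k, tk in enumerate(ts):  # Lagrange interpolation at t = 0
        num, den = 1, 1
        for m, tm in enumerate(ts):
            if m != k:
                num = num * (-tm) % p
                den = den * (tk - tm) % p
        val = (val + ys[k] * num * pow(den, -1, p)) % p
    return val
```

If the oracle errs on some fraction of random matrices, the queries (which are correlated with each other) can hit errors, and that's exactly where noisy interpolation and list decoding come in.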

I don’t specifically remember Avi talking about the random self-reducibility of the permanent in his 2000 lecture, but he obviously *would have* talked about it! And it was really knowing about the random self-reducibility of the permanent, and how powerful it was, that led me and Alex Arkhipov to the study of BosonSampling in the first place.

In complexity theory, the random self-reducibility of the permanent is hugely important because it was, in a sense, the spark for some of our most convincing examples of *non-relativizing* results—that is, results that fail relative to a suitable oracle. The most famous such result is that #P, and for that matter even PSPACE, admit interactive protocols (the IP=PSPACE theorem). In the 1970s, Baker, Gill, and Solovay pointed out that non-relativizing methods would be needed to resolve P vs. NP and many of the other great problems of the field.

In 2007, Avi and I wrote our only joint paper so far. In that paper, we decided to take a closer look at the non-relativizing results based on interactive proofs. We said: while it’s true that these results don’t relativize—that is, there are oracles relative to which they fail—nevertheless, these results hold relative to all oracles that themselves encode low-degree polynomials over finite fields (such as the permanent). So, introducing a term, Avi and I said that results like IP=PSPACE *algebrize*.

But then we also showed that, if you want to prove P≠NP—or for that matter, even prove circuit lower bounds that go “slightly” beyond what’s already known (such as NEXP⊄P/poly)—you’ll need techniques that are not only non-relativizing, but *also* non-algebrizing. So in some sense, the properties of the permanent that are used (for example) in proving that it has an interactive protocol, just “aren’t prying the black box open wide enough.”

I have a more recent result, from 2011 or so, that I never got around to finishing a paper about. In this newer work, I decided to take another look at the question: what is it about the permanent that *actually* fails to relativize? And I prove the following result: relative to an *arbitrary* oracle A, the class #P has complete problems that are both random self-reducible and downward self-reducible (that is, reducible to smaller instances of the same problem). So, contrary to what certainly I and maybe others had often thought, it’s *not* the random self-reducibility of the permanent that’s the crucial thing about it. What’s important, instead, is a further property that the permanent has, of being *self-checkable* and *self-correctible*.

In other words: given (say) a noisy circuit for the permanent, it’s not just that you can correct that circuit to compute whichever low-degree polynomial it was close to computing. Rather, it’s that you can confirm that the polynomial is in fact the permanent, and nothing else.

I like the way Ketan Mulmuley thinks about this phenomenon in his Geometric Complexity Theory, which is a speculative, audacious program to try to *prove* that the permanent is harder than the determinant, and to tackle the other great separation questions of complexity theory (including P vs. NP), by using algebraic geometry and representation theory. Mulmuley says: the permanent is a polynomial in the entries of an n×n matrix that not only satisfies certain symmetries (e.g., under interchanging rows or columns), but is *uniquely characterized* by those symmetries. In other words, if you find a polynomial that passes certain tests—for example, if it behaves in the right way under rescaling and interchanging rows and columns—then that polynomial *must* be the permanent, or a scalar multiple of the permanent. Similarly, if you find a polynomial that passes the usual interactive proof for the permanent, that polynomial must be the permanent. I think this goes a long way toward explaining why the permanent is so special: it’s not just any hard-to-compute, low-degree polynomial; it’s one that you can recognize when you come across it.

I’ve now told you about the eventual impact of one single survey talk that Avi gave 16 years ago—not even a particularly major or important one. So you can only imagine what Avi’s impact must have been on all of us, if you integrate over all the talks he’s given and papers he’s written and young people he’s mentored and connections he’s made his entire career. May that impact be permanent.

In the few weeks since I last overcame the activation barrier to blog, here are some things that happened.

**Politics**

Friday’s revelation, of Trump boasting on tape to George W. Bush’s cousin about his crotch-grabbing escapades, did not increase my opposition to Trump, for a very simple reason: because I’d already opposed Trump by the maximum amount that’s possible. Nevertheless, I’ll be gratified if this news brings Trump down, and leads to the landslide defeat he’s deserved from the beginning for 10^{1000} reasons.

Still, history (including the history of this election) teaches us not to take things for granted. So if you’re *still* thinking of voting for Trump, let me recommend Scott Alexander’s endorsement of “anyone but Trump.” I’d go even further than my fellow Scott A. in much of what he says, but his post is nevertheless a masterful document, demonstrating how someone who *nobody* could accuse of being a statist social-justice warrior, but who “merely” has a sense for science and history and Enlightenment ideals and the ironic and absurd, can reach the conclusion that Trump had better be stopped, and with huge argumentative margin to spare.

See also an interview with me on *Huffington Post* about TrumpTrading, conducted by Linchuan Zhang. If you live in a swing state and support Johnson, or in a safe state and support Hillary, I still recommend signing up, since even a 13% probability of a Trump win is too high. I’ve found a partner in Ohio, a libertarian-leaning professor. The only way I can foresee not going through with the swap, is if the bus tape causes Trump’s popularity to drop so precipitously that *Texas* becomes competitive.

In the meantime, it’s also important that we remain vigilant about the integrity of the election—not about in-person voter fraud, which statistically doesn’t exist, but about intimidation at the polls and the purging of eligible voters and tampering with electronic voting machines. As I’ve mentioned before on this blog, my childhood friend Alex Halderman, now a CS professor at the University of Michigan, has been at the forefront of demonstrating the security problems with electronic voting machines, and advocating for paper trails. Alex and his colleagues have actually succeeded in influencing how elections are conducted in many states—but not in all of them. If you want to learn more, check out an in-depth profile of Alex in the latest issue of *Playboy*. (There’s no longer nudity in *Playboy*, so you can even read the thing at work…)

**Now On To SCIENCE**

As some of you probably saw, Mohammad Bavarian, Giulio Gueltrini, and I put out a new paper about computability theory in a universe with closed timelike curves. This complements my and John Watrous’s earlier work about *complexity* theory in a CTC universe, where we showed that finding a fixed-point of a bounded superoperator is a PSPACE-complete problem. In the new work, we show that finding a fixed-point of an *un*bounded superoperator has the same difficulty as the halting problem.

Some of you will also have seen that folks from the Machine Intelligence Research Institute (MIRI)—Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor—recently put out a major 130-page paper entitled “Logical Induction”. (See also their blog announcement.) This paper takes direct aim at a question that’s come up repeatedly in the comments section of this blog: namely, how can we sensibly assign probabilities to mathematical statements, such as “the 10^{10^1000}th decimal digit of π is a 3″? The paper proposes an essentially *economic* framework for that question, involving a marketplace for “mathematical truth futures,” in which new mathematical truths get revealed one by one, and one doesn’t want any polynomial-time traders to be able to make an infinite amount of money by finding patterns in the truths that the prices haven’t already factored in. I won’t be able to do justice to the work in this paragraph (or even come close), but I hope this sophisticated paper gets the attention it deserves from mathematicians, logicians, CS theorists, AI people, economists, and anyone else who’s ever wondered how a “Bayesian” could sleep at night after betting on (say) the truth or falsehood of Goldbach’s Conjecture. Feel free to discuss in the comments section.

My PhD student Adam Bouland and former visiting student Lijie Chen, along with Dhiraj Holden, Justin Thaler, and Prashant Vasudevan, have put out a new paper that achieves an oracle separation between the complexity classes SZK and PP (among many other things)—thereby substantially generalizing my quantum lower bound for the collision problem, and solving an open problem that I’d thought about without success since 2002. Huge relativized congratulations to them!

A new paper by my PhD student Shalev Ben-David and Or Sattath, about using ideas from quantum money to create signed quantum tokens, has been making the rounds on social media. Why? Read the abstract and see for yourself! (My only “contribution” was to tell them not to change a word.)

Several people wrote in to tell me about a recent paper by Henry Lin and Max Tegmark, which tries to use physics analogies and intuitions to explain why deep learning works as well as it does. To my inexpert eyes, the paper seemed to contain a lot of standard insights from computational learning theory (for example, the need to exploit symmetries and regularities in the world to get polynomial-size representations), but expressed in a different language. What confused me most was the paper’s claim to prove “no-flattening theorems” showing the necessity of large-depth neural networks—since in the sense I would mean, such a theorem couldn’t possibly be proved without a major breakthrough in computational complexity (e.g., separating the levels of the class TC^{0}). Again, anyone who understands what’s going on is welcome to share in the comments section.

Sevag Gharibian asked me to advertise that the Call for Papers for the 2017 Conference on Computational Complexity, to be held July 6-9 in Riga, Latvia, is now up.

It’s wonderful to be here at QCRYPT among so many friends—this is the first significant conference I’ve attended since I moved from MIT to Texas. I do, however, need to register a complaint with the organizers, which is: why wasn’t I allowed to bring my concealed firearm to the conference? You know, down in Texas, we don’t look too kindly on you academic elitists in Washington DC telling us what to do, who we can and can’t shoot and so forth. Don’t mess with Texas! As you might’ve heard, many of us Texans even support a big, beautiful, physical wall being built along our border with Mexico. Personally, though, I don’t think the wall proposal goes far enough. Forget about illegal immigration and smuggling: I don’t even want Americans and Mexicans to be able to win the CHSH game with probability exceeding 3/4. Do any of you know what kind of wall could prevent *that*? Maybe a *meta*physical wall.
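That 3/4, incidentally, is a bound you can verify by brute force: classically, the best the two players can do in the CHSH game is win 3 of the 4 question pairs, whereas sharing entanglement gets them up to cos²(π/8) ≈ 0.854. A tiny sanity check (my illustration, not part of the talk):

```python
from itertools import product

# CHSH: the referee sends random bits x, y; Alice answers a(x), Bob answers
# b(y); they win iff a XOR b == x AND y. Enumerate all deterministic
# strategies (shared classical randomness can't beat the best deterministic one).
best = max(
    sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
        for x, y in product((0, 1), repeat=2))
    for a0, a1, b0, b1 in product((0, 1), repeat=4))
print(best, "out of 4")  # 3 out of 4
```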

OK, but that’s not what I wanted to talk about. When Yi-Kai asked me to give an after-dinner talk, I wasn’t sure whether to try to say something actually relevant to quantum cryptography or just make jokes. So I’ll do something in between: I’ll tell you about research directions in quantum cryptography that are *also* jokes.

The subject of this talk is a deep theorem that stands as one of the crowning achievements of our field. I refer, of course, to the No-Cloning Theorem. Almost everything we’re talking about at this conference, from QKD onwards, is based in some way on quantum states being unclonable. If you read Stephen Wiesner’s paper from 1968, which founded quantum cryptography, the No-Cloning Theorem already played a central role—although Wiesner didn’t call it that. By the way, here’s my #1 piece of research advice to the students in the audience: if you want to become immortal, just find some fact that everyone already knows and give it a name!

I’d like to pose the question: why should our universe be governed by physical laws that make the No-Cloning Theorem true? I mean, it’s *possible* that there’s some other reason for our universe to be quantum-mechanical, and No-Cloning is just a byproduct of that. No-Cloning would then be like the armpit of quantum mechanics: not there because it does anything useful, but just because there’s gotta be *something* under your arms.

OK, but No-Cloning *feels* really fundamental. One of my early memories is when I was 5 years old or so, and utterly transfixed by my dad’s home fax machine, one of those crappy 1980s fax machines with wax paper. I kept thinking about it: is it really true that a piece of paper gets transmaterialized, sent through a wire, and reconstituted at the other location? Could I have been *that* wrong about how the universe works? Until finally I got it—and once you get it, it’s hard even to recapture your original confusion, because it becomes so obvious that the world is made not of stuff but of copyable bits of information. “Information wants to be free!”

The No-Cloning Theorem represents nothing less than a partial return to the view of the world that I had before I was five. It says that quantum information *doesn’t* want to be free: it wants to be private. There is, it turns out, a kind of information that’s tied to a particular place, or set of places. It can be moved around, or even teleported, but it can’t be copied the way a fax machine copies bits.

So I think it’s worth at least entertaining the possibility that we don’t have No-Cloning because of quantum mechanics; we have quantum mechanics because of No-Cloning—or because quantum mechanics is the simplest, most elegant theory that has unclonability as a core principle. But if so, that just pushes the question back to: why *should* unclonability be a core principle of physics?

**Quantum Key Distribution**

A first suggestion about this question came from Gilles Brassard, who’s here. Years ago, I attended a talk by Gilles in which he speculated that the laws of quantum mechanics are what they are *because* Quantum Key Distribution (QKD) has to be possible, while bit commitment has to be *im*possible. If true, that would be awesome for the people at this conference. It would mean that, far from being this exotic competitor to RSA and Diffie-Hellman that’s distance-limited and bandwidth-limited and has a tiny market share right now, QKD would be the entire reason why the universe is as it is! Or maybe what this really amounts to is an appeal to the Anthropic Principle. Like, if QKD *hadn’t* been possible, then we wouldn’t be here at QCRYPT to talk about it.

**Quantum Money**

But maybe we should search more broadly for the reasons why our laws of physics satisfy a No-Cloning Theorem. Wiesner’s paper sort of hinted at QKD, but the main thing it had was a scheme for unforgeable quantum money. This is one of the most direct uses imaginable for the No-Cloning Theorem: to store economic value in something that it’s physically impossible to copy. So maybe *that’s* the reason for No-Cloning: because God wanted us to have e-commerce, and didn’t want us to have to bother with blockchains (and certainly not with credit card numbers).

The central difficulty with quantum money is: how do you authenticate a bill as genuine? (OK, fine, there’s also the difficulty of how to keep a bill coherent in your wallet for more than a microsecond or whatever. But we’ll leave that for the engineers.)

In Wiesner’s original scheme, he solved the authentication problem by saying that, whenever you want to verify a quantum bill, you bring it back to the bank that printed it. The bank then looks up the bill’s classical serial number in a giant database, which tells the bank in which basis to measure each of the bill’s qubits.

With this system, you can actually get information-theoretic security against counterfeiting. OK, but the fact that you have to bring a bill to the bank to be verified negates much of the advantage of quantum money in the first place. If you’re going to keep involving a bank, then why not just use a credit card?
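Wiesner's scheme is simple enough to simulate classically, since each qubit is a BB84 state fully described by a (basis, bit) pair. Here's a toy sketch of minting, verifying, and the naive measure-and-reprint counterfeiting attack, whose forged bill passes verification with probability only (3/4)^n (function names are mine):

```python
import numpy as np

rng = np.random.default_rng(42)

def mint(n):
    """Bank secretly picks a random basis (0 = Z, 1 = X) and bit per qubit."""
    return rng.integers(0, 2, n), rng.integers(0, 2, n)  # bases, bits

def measure(state_bases, state_bits, meas_bases):
    """Measure each qubit: same basis gives the stored bit, wrong basis
    gives a uniformly random outcome."""
    random_bits = rng.integers(0, 2, len(state_bits))
    return np.where(state_bases == meas_bases, state_bits, random_bits)

def verify(bases, bits, bill_bases, bill_bits):
    """Bank measures the bill's qubits in the recorded secret bases."""
    return np.array_equal(measure(bill_bases, bill_bits, bases), bits)

n = 20
bases, bits = mint(n)
assert verify(bases, bits, bases, bits)  # the honest bill always passes
# Naive counterfeiter: measure in random bases, reprint whatever was seen.
wins = 0
for _ in range(2000):
    guess = rng.integers(0, 2, n)
    seen = measure(bases, bits, guess)
    wins += verify(bases, bits, guess, seen)
print(f"forgery pass rate ≈ {wins/2000:.4f} (theory: (3/4)^{n} ≈ {0.75**n:.2e})")
```

Notice that `verify` needs the bank's secret bases, which is exactly the drawback: the bill has to go back to whoever minted it.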

That’s why over the past decade, some of us have been working on *public-key* quantum money: that is, quantum money that anyone can verify. For this kind of quantum money, it’s easy to see that the No-Cloning Theorem is no longer enough: you also need some cryptographic assumption. But OK, we can consider that. In recent years, we’ve achieved glory by proposing a huge variety of public-key quantum money schemes—and we’ve achieved even greater glory by breaking almost all of them!

After a while, there were basically two schemes left standing: one based on knot theory by Ed Farhi, Peter Shor, et al. That one has been proven to be secure under the assumption that it can’t be broken. The second scheme, which Paul Christiano and I proposed in 2012, is based on hidden subspaces encoded by multivariate polynomials. For our scheme, Paul and I were able to do better than Farhi et al.: we gave a *security reduction*. That is, we *proved* that our quantum money scheme is secure, *unless* there’s a polynomial-time quantum algorithm to find hidden subspaces encoded by low-degree multivariate polynomials (yadda yadda, you can look up the details) with much greater success probability than we thought possible.

Today, the situation is that my and Paul’s security proof remains completely valid, but meanwhile, our money is completely insecure! Our reduction means the opposite of what we thought it did. There *is* a break of our quantum money scheme, and *as a consequence*, there’s also a quantum algorithm to find large subspaces hidden by low-degree polynomials with much better success probability than we’d thought. What happened was that first, some French algebraic cryptanalysts—Faugere, Pena, I can’t pronounce their names—used Gröbner bases to break the noiseless version of the scheme, in classical polynomial time. So I thought, phew! At least I had acceded when Paul insisted that we also include a noisy version of the scheme. But later, Paul noticed that there’s a quantum reduction from the problem of breaking our noisy scheme to the problem of breaking the noiseless one, so the former is broken as well.

I’m choosing to spin this positively: “we used quantum money to discover a striking new quantum algorithm for finding subspaces hidden by low-degree polynomials. Err, yes, that’s exactly what we did.”

But, bottom line, until we manage to invent a better public-key quantum money scheme, or otherwise sort this out, I don’t think we’re entitled to claim that God put unclonability into our universe in order for quantum money to be possible.

**Copy-Protected Quantum Software**

So if not money, then what about its cousin, copy-protected software—could *that* be why No-Cloning holds? By copy-protected quantum software, I just mean a quantum state that, if you feed it into your quantum computer, lets you evaluate some Boolean function on any input of your choice, but that *doesn’t* let you efficiently prepare *more* states that let the same function be evaluated. I think this is important as one of the preeminent *evil* applications of quantum information. Why should nuclear physicists and genetic engineers get a monopoly on the evil stuff?

OK, but is copy-protected quantum software even possible? The first worry you might have is that, yeah, maybe it’s possible, but then every time you wanted to run the quantum program, you’d have to make a measurement that destroyed it. So then you’d have to go back and buy a new copy of the program for the next run, and so on. Of course, to the software company, this would presumably be a feature rather than a bug!

But as it turns out, there’s a fact many of you know—sometimes called the “Gentle Measurement Lemma,” other times the “Almost As Good As New Lemma”—which says that, as long as the outcome of your measurement on a quantum state could be predicted almost with certainty given knowledge of the state, the measurement can be implemented in such a way that it hardly damages the state at all. This tells us that, if quantum money, copy-protected quantum software, and the other things we’re talking about are possible at all, then they can also be made reusable. I summarize the principle as: “if rockets, then space shuttles.”

Much like with quantum money, one can show that, relative to a suitable oracle, it’s possible to quantumly copy-protect *any* efficiently computable function—or rather, any function that’s hard to learn from its input/output behavior. Indeed, the implementation can be not only copy-protected but also *obfuscated*, so that the user learns nothing *besides* the input/output behavior. As Bill Fefferman pointed out in his talk this morning, the No-Cloning Theorem lets us bypass Barak et al.’s famous result on the impossibility of obfuscation, because their impossibility proof assumed the ability to *copy* the obfuscated program.

Of course, what we really care about is whether quantum copy-protection is possible in the *real* world, with no oracle. I was able to give candidate implementations of quantum copy-protection for extremely special functions, like one that just checks the validity of a password. In the general case—that is, for arbitrary programs—Paul Christiano has a beautiful proposal for how to do it, which builds on our hidden-subspace money scheme. Unfortunately, since our money scheme is currently in the shop being repaired, it’s probably premature to think about the security of the much more complicated copy-protection scheme! But these are wonderful open problems, and I encourage any of you to come and scoop us. Once we know whether uncopyable quantum software is possible at all, we could then debate whether it’s the “reason” for our universe to have unclonability as a core principle.

**Unclonable Proofs and Advice**

Along the same lines, I can’t resist mentioning some favorite research directions, which some enterprising student here could totally turn into a talk at next year’s QCRYPT.

Firstly, what can we say about clonable versus unclonable quantum *proofs*—that is, QMA witness states? In other words: for which problems in QMA can we ensure that there’s an accepting witness that lets you efficiently create as many additional accepting witnesses as you want? (I mean, besides the QCMA problems, the ones that have short classical witnesses?) For which problems in QMA can we ensure that there’s an accepting witness that *doesn’t* let you efficiently create any additional accepting witnesses? I do have a few observations about these questions—ask me if you’re interested—but on the whole, I believe almost anything one can ask about them remains open.

Admittedly, it’s not clear how much *use* an unclonable proof would be. Like, imagine a quantum state that encoded a proof of the Riemann Hypothesis, and which you would keep in your bedroom, in a glass orb on your nightstand or something. And whenever you felt your doubts about the Riemann Hypothesis resurfacing, you’d take the state out of its orb and measure it again to reassure yourself of RH’s truth. You’d be like, *“my preciousssss!”* And no one else could copy your state and thereby gain the same Riemann-faith-restoring powers that you had. I dunno, I probably won’t hawk this application in a DARPA grant.

Similarly, one can ask about clonable versus unclonable *quantum advice states*—that is, initial states that are given to you to boost your computational power beyond that of an ordinary quantum computer. And that’s also a fascinating open problem.

OK, but maybe none of this quite gets at why our universe has unclonability. And this is an after-dinner talk, so do you want me to get to the *really* crazy stuff? Yes?

**Self-Referential Paradoxes**

OK! What if unclonability is our universe’s way around the paradoxes of self-reference, like the unsolvability of the halting problem and Gödel’s Incompleteness Theorem? Allow me to explain what I mean.

In kindergarten or wherever, we all learn Turing’s proof that there’s no computer program to solve the halting problem. But what isn’t usually stressed is that that proof actually does more than advertised. If someone hands you a program that they claim solves the halting problem, Turing doesn’t merely tell you that that person is wrong—rather, he shows you exactly *how* to expose the person as a jackass, by constructing an example input on which their program fails. All you do is, you take their claimed halt-decider, modify it in some simple way, and then feed the result back to the halt-decider as input. You thereby create a situation where, if your program halts given its own code as input, then it must run forever, and if it runs forever then it halts. “WHOOOOSH!” [head-exploding gesture]
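In code, Turing's recipe looks like this (a toy Python rendering of the diagonal step, purely illustrative):

```python
def make_counterexample(claimed_halts):
    """Given any claimed halting-decider claimed_halts(program) -> bool,
    construct a program it must be wrong about: ask the decider about this
    very program, then do the opposite of whatever it predicts."""
    def troublemaker():
        if claimed_halts(troublemaker):  # decider predicts "halts"...
            while True:                  # ...so loop forever instead
                pass
        # decider predicts "runs forever", so halt immediately
    return troublemaker

# Expose a decider that claims every program halts:
always_yes = lambda f: True
t = make_counterexample(always_yes)
print(always_yes(t))  # True -- yet actually running t() would loop forever
```

Feeding the decider's own verdict back into the program it's judging is the "WHOOOOSH" moment: whichever answer it gives, the constructed program does the opposite.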

OK, but now imagine that the program someone hands you, which they claim solves the halting problem, is a *quantum* program. That is, it’s a quantum state, which you measure in some basis depending on the program you’re interested in, in order to decide whether that program halts. Well, the truth is, this quantum program *still* can’t work to solve the halting problem. After all, there’s some classical program that simulates the quantum one, albeit less efficiently, and we already know that the classical program can’t work.

But now consider the question: how would you actually produce an example input on which this quantum program failed to solve the halting problem? Like, suppose the program worked on every input you tried. Then ultimately, to produce a counterexample, you might need to follow Turing’s proof and make a copy of the claimed quantum halt-decider. But then, of course, you’d run up against the No-Cloning Theorem!

So we seem to arrive at the conclusion that, while of course there’s no quantum program to solve the halting problem, there *might* be a quantum program for which no one could explicitly *refute* that it solved the halting problem, by giving a counterexample.

I was pretty excited about this observation for a day or two, until I noticed the following. Let’s suppose your quantum program that allegedly solves the halting problem has n qubits. Then it’s possible to prove that the program can’t possibly be used to compute more than, say, 2n bits of Chaitin’s constant Ω, which is the probability that a random program halts. OK, but if we had an actual oracle for the halting problem, we could use it to compute as many bits of Ω as we wanted. So, suppose I treated my quantum program *as if* it were an oracle for the halting problem, and I used it to compute the first 2n bits of Ω. Then I would *know* that, assuming the truth of quantum mechanics, the program must have made a mistake somewhere. There would still be something weird, which is that I wouldn’t know on which input my program had made an error—I would just know that it must’ve erred somewhere! With a bit of cleverness, one can narrow things down to two inputs, such that the quantum halt-decider must have erred on at least one of them. But I don’t know whether it’s possible to go further, and concentrate the wrongness on a single query.

We can play a similar game with other famous applications of self-reference. For example, suppose we use a quantum state to encode a system of axioms. Then that system of axioms will still be subject to Gödel’s Incompleteness Theorem (which I guess I believe despite the umlaut). If it’s consistent, it won’t be able to prove all the true statements of arithmetic. But we might never be able to produce an explicit example of a true statement that the axioms don’t prove. To do so we’d have to clone the state encoding the axioms and thereby violate No-Cloning.

**Personal Identity**

But since I’m a bit drunk, I should confess that all this stuff about Gödel and self-reference is just a warmup to what I *really* wanted to talk about, which is whether the No-Cloning Theorem might have anything to do with the mysteries of personal identity and “free will.” I first encountered this idea in Roger Penrose’s book, *The Emperor’s New Mind*. But I want to stress that I’m not talking here about the possibility that the brain is a quantum computer—much less about the possibility that it’s a quantum-gravitational hypercomputer that uses microtubules to solve the halting problem! I might be drunk, but I’m not *that* drunk. I also think that the Penrose-Lucas argument, based on Gödel’s Theorem, for why the brain has to work that way is fundamentally flawed.

But here I’m talking about something different. See, I have a lot of friends in the Singularity / Friendly AI movement. And I talk to them whenever I pass through the Bay Area, which is where they congregate. And many of them express great confidence that before too long—maybe in 20 or 30 years, maybe in 100 years—we’ll be able to upload ourselves to computers and live forever on the Internet (as opposed to just living 70% of our lives on the Internet, like we do today).

This would have lots of advantages. For example, any time you were about to do something dangerous, you’d just make a backup copy of yourself first. If you were struggling with a conference deadline, you’d spawn 100 temporary copies of yourself. If you wanted to visit Mars or Jupiter, you’d just email yourself there. If Trump became president, you’d not run yourself for 8 years (or maybe 80 or 800 years). And so on.

Admittedly, some awkward questions arise. For example, let’s say the hardware runs three copies of your code and takes a majority vote, just for error-correcting purposes. Does that bring three copies of you into existence, or only one copy? Or let’s say your code is run homomorphically encrypted, with the only decryption key stored in another galaxy. Does that count? Or you email yourself to Mars. If you want to make sure that you’ll wake up on Mars, is it important that you delete the copy of your code that remains on earth? Does it matter whether anyone runs the code or not? And what exactly counts as “running” it? Or my favorite one: could someone threaten you by saying, “look, I have a copy of *your* code, and if you don’t do what I say, I’m going to make a thousand copies of it and subject them all to horrible tortures?”

The issue, in all these cases, is that in a world where there could be millions of copies of your code running on different substrates in different locations—or things where it’s not even clear whether they *count* as a copy or not—we don’t have a principled way to take as input a description of the state of the universe, and then identify where in the universe *you* are—or even a probability distribution over places where you could be. And yet you seem to need such a way in order to make predictions and decisions.

A few years ago, I wrote this gigantic, post-tenure essay called *The Ghost in the Quantum Turing Machine*, where I tried to make the point that we don’t know at what level of granularity a brain would need to be simulated in order to duplicate someone’s subjective identity. Maybe you’d only need to go down to the level of neurons and synapses. But *if* you needed to go all the way down to the molecular level, then the No-Cloning Theorem would immediately throw a wrench into most of the paradoxes of personal identity that we discussed earlier.

For it would mean that there were some microscopic yet essential details about each of us that were fundamentally uncopyable, localized to a particular part of space. We would all, in effect, be quantumly copy-protected software. Each of us would have a core of unpredictability—not merely probabilistic unpredictability, like that of a quantum random number generator, but genuine unpredictability—that an external model of us would fail to capture completely. Of course, by having futuristic nanorobots scan our brains and so forth, it would be possible in principle to make extremely realistic copies of us. But those copies necessarily wouldn’t capture quite everything. And, one can speculate, maybe not enough for your subjective experience to “transfer over.”

Maybe the most striking aspect of this picture is that sure, you could teleport yourself to Mars—but to do so you’d need to use quantum teleportation, and as we all know, quantum teleportation necessarily destroys the original copy of the teleported state. So we’d avert this metaphysical crisis about what to do with the copy that remained on Earth.

Look—I don’t know if any of you are like me, and have ever gotten depressed by reflecting that all of your life experiences, all your joys and sorrows and loves and losses, every itch and flick of your finger, could in principle be encoded by a huge but finite string of bits, and therefore by a single positive integer. (Really? No one else gets depressed about that?) It’s kind of like: given that this integer has existed since before there was a universe, and will continue to exist after the universe has degenerated into a thin gruel of radiation, what’s the point of even going through the motions? You know?

But the No-Cloning Theorem raises the possibility that at least this integer is really *your* integer. At least it’s something that no one else knows, and no one else could know in principle, even with futuristic brain-scanning technology: you’ll always be able to surprise the world with a new digit. I don’t know if that’s true or not, but if it *were* true, then it seems like the sort of thing that would be worthy of elevating unclonability to a fundamental principle of the universe.

So as you enjoy your dinner and dessert at this historic Mayflower Hotel, I ask you to reflect on the following. People can photograph this event, they can video it, they can type up transcripts, in principle they could even record everything that happens down to the millimeter level, and post it on the Internet for posterity. But they’re not gonna get the quantum states. There’s *something* about this evening, like about every evening, that will vanish forever, so please savor it while it lasts. Thank you.

**Update (Sep. 20):** Unbeknownst to me, Marc Kaplan *did* video the event and put it up on YouTube! Click here to watch. Thanks very much to Marc! I hope you enjoy, even though of course, the video can’t precisely clone the experience of having been there.

[*Note:* The part where I raise my middle finger is an inside joke—one of the speakers during the technical sessions inadvertently did the same while making a point, causing great mirth in the audience.]

Also, Toby Ord, a philosopher I know at Oxford, points me to a neat academic paper he wrote that analyzes vote-swapping as an example of “moral trade,” and that mentions the *Porter v. Bowen* decision holding vote-swapping to be legal in the US.

Also, if we find two Gary Johnson supporters in swing states willing to trade, I’ve been contacted by a fellow Austinite who’d be happy to accept the second trade.

As regular readers might know, my first appearance in the public eye (for a loose definition of “public eye”) had nothing to do with D-Wave, Gödel’s Theorem, the computational complexity of quantum gravity, Australian printer ads, or—god forbid—social justice shaming campaigns. Instead it centered on NaderTrading: the valiant but doomed effort, in the weeks leading up to the 2000 US Presidential election, to stop George W. Bush’s rise to power by encouraging Ralph Nader supporters in swing states (such as Florida) to vote for Al Gore, while pairing themselves off over the Internet with Gore supporters in safe states (such as Texas or California) who would vote for Nader on their behalf. That way, Nader’s vote share (and his chance of reaching 5% of the popular vote, which would’ve qualified him for federal funds in 2004) wouldn’t be jeopardized, but neither would Gore’s chance of winning the election.

Here’s what I thought at the time:

- The election would be razor-close (though I never could’ve guessed *how* close).
- Bush was a malignant doofus who would be a disaster for the US and the world (though I certainly didn’t know how—recall that, at the time, Bush was running as an *isolationist*).
- Many Nader supporters, including the ones who I met at Berkeley, prioritized personal virtue so completely over real-world consequences that they might actually throw the election to Bush.

NaderTrading, as proposed by law professor Jamin Raskin and others, seemed like one of the clearest ways for nerds who knew these points, but who lacked political skills, to throw themselves onto the gears of history and do something good for the world.

So, as a 19-year-old grad student, I created a website called “In Defense of NaderTrading” (archived version), which didn’t arrange vote swaps itself—other sites did that—but which explored some of the game theory behind the concept and answered some common objections to it. (See also here.) Within days of creating the site, I’d somehow become an “expert” on the topic, and was fielding hundreds of emails as well as requests for print, radio, and TV interviews.

Alas, the one question everyone wanted to ask me was the one that I, as a CS nerd, was the least qualified to answer: *is NaderTrading legal? isn’t it kind of like … buying and selling votes?*

I could only reply that, to my mind, NaderTrading obviously *ought* to be legal, because:

- Members of Congress and state legislatures trade votes all the time.
- A private agreement between two friends to each vote for the other’s preferred candidate seems self-evidently legal, so why should it be any different if a website is involved?
- The whole point of NaderTrading is to exercise your voting power more fully—pretty much the *opposite* of bartering it away for private gain.
- While the election laws vary by state, the ones I read very specifically banned trading votes for *tangible goods*—they never even mentioned trading votes for other votes, even though they easily could’ve done so had legislators intended to ban that.

But—and here was the fatal problem—I could only address principles and arguments, rather than politics and power. I couldn’t honestly assure the people who wanted to vote-swap, or to set up vote-swapping sites, that they wouldn’t be prosecuted for it.

As it happened, the main vote-swapping site, voteswap2000.com, was shut down by California’s Republican attorney general, Bill Jones, only four days after it opened. A second vote-swapping site, votexchange.com, was never directly threatened but also ceased operations because of what happened to voteswap2000. Many legal scholars felt confident that these shutdowns wouldn’t hold up in court, but with just a few weeks until the election, there was no time to fight them.

Before it was shut down, voteswap2000 had brokered 5,041 vote-swaps, including hundreds in Florida. Had that and similar sites been allowed to continue operating, it’s entirely plausible that they would’ve changed the outcome of the election. No Iraq war, no 2008 financial meltdown: we would’ve been living in a different world. Note that, of the 100,000 Floridians who ultimately voted for Nader, we would’ve needed to convince fewer than 1% of them.

Today, we face something I didn’t expect to face in my lifetime: namely, a serious prospect of a takeover of the United States by a nativist demagogue with open contempt for democratic norms and legendarily poor impulse control. Meanwhile, there are two third-party candidates—Gary Johnson and Jill Stein—who together command 10% of the vote. A couple months ago, I’d expressed hopes that Johnson might help Hillary, by splitting the Republican vote. But it now looks clear that, on balance, not only Stein but also Johnson are helping Trump, by splitting up that part of the American vote that’s not driven by racial resentment.

So recently a friend—the philanthropist and rationalist Holden Karnofsky—posed a question to me: should we revive the vote-swapping idea from 2000? And presumably this time around, enhance the idea with 21st-century bells and whistles like mobile apps and Facebook, to make it all the easier for Johnson/Stein supporters in swing states and Hillary supporters in safe states to find each other and trade votes?

Just like so many well-meaning people back in 2000, Holden was worried about one thing: is vote-swapping against the law? If someone created a mobile vote-swapping app, could that person be thrown in jail?

At first, I had no idea: I assumed that vote-swapping simply remained in the legal Twilight Zone where it was last spotted in 2000. But then I did something radical: *I looked it up*. And when I did, I discovered a decade-old piece of news that changes everything.

On August 6, 2007, the Ninth Circuit Court of Appeals finally ruled on a case, *Porter v. Bowen*, stemming from the California attorney general’s shutdown of voteswap2000.com. Their ruling, which is worth reading in full, was unequivocal.

Vote-swapping, it said, is protected by the First Amendment, which state election laws can’t supersede. It is fundamentally different from buying or selling votes.

Yes, the decision also granted the California attorney general immunity from prosecution, on the ground that vote-swapping’s legality hadn’t yet been established in 2000—indeed it wouldn’t be, until the Ninth Circuit’s decision itself! Nevertheless, the ruling made clear that the appellants (the creators of voteswap2000 and some others) were granted the relief they sought: namely, an assurance that vote-swapping websites would be protected from state interference in the future.

Admittedly, if vote-swapping takes off again, it’s possible that the question will be re-litigated and will end up in the Supreme Court, where the Ninth Circuit’s ruling could be reversed. For now, though, let the message be shouted from the rooftops: **a court has ruled. You cannot be punished for cooperating with your fellow citizens to vote strategically, or for helping others do the same.**

For those of you who oppose Donald Trump and who are good at web and app development: with just two months until the election, I think the time to set up some serious vote-swapping infrastructure is **right now**. Let your name be etched in history, alongside those who stood up to all the vicious demagogues of the past. And let that happen without your even needing to get up from your computer chair.
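For what it’s worth, the core matching logic is almost trivially simple—everything hard about such a service is trust, identity verification, and outreach. A hypothetical minimal sketch (the data format, names, and swing-state list are invented for illustration):

```python
from collections import deque

# Invented data format: (name, state, candidate they'd otherwise vote for)
SWING_STATES = {"FL", "OH", "PA", "NC", "NV", "NH"}

def match_swaps(third_party_voters, clinton_voters):
    """Greedily pair each third-party supporter in a swing state with a
    Clinton supporter in a safe state.  Each matched pair agrees: the
    swing-state voter votes Clinton; the safe-state voter votes for
    the third-party candidate."""
    safe_clinton = deque(v for v in clinton_voters
                         if v[1] not in SWING_STATES)
    pairs = []
    for voter in third_party_voters:
        if voter[1] in SWING_STATES and safe_clinton:
            pairs.append((voter, safe_clinton.popleft()))
    return pairs

pairs = match_swaps(
    [("Ava", "FL", "Johnson"), ("Ben", "TX", "Johnson")],
    [("Cam", "CA", "Clinton"), ("Dee", "OH", "Clinton")])
# Only Ava (swing state) gets matched, with Cam (safe state).
```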

I’m not, I confess, a huge fan of either Gary Johnson *or* Jill Stein (especially not Stein). Nevertheless, here’s my promise: on November 8, I will cast my vote in the State of Texas for Gary Johnson, *if* I can find at least one Johnson supporter who lives in a swing state, who I feel I can trust, and who agrees to vote for Hillary Clinton on my behalf.

If you think you’ve got what it takes to be my vote-mate, send me an email, tell me about yourself, and let’s talk! I’m not averse to some electoral polyamory—i.e., *lots* of Johnson supporters in swing states casting their votes for Clinton, in exchange for the world’s most famous quantum complexity blogger voting for Johnson—but I’m willing to settle for a monogamous relationship if need be.

And as for Stein? I’d probably rather subsist on tofu than vote for her, because of her support for seemingly every pseudoscience she comes across, and especially because of her endorsement of the vile campaign to boycott Israel. Even so: if Stein supporters in swing states whose sincerity I trusted offered to trade votes with me, and Johnson supporters didn’t, I would bury my scruples and vote for Stein. Right now, the need to stop the madman takes precedence over everything else.

One last thing to get out of the way. When they learn of my history with NaderTrading, people keep pointing me to a website called BalancedRebellion.com, and exclaiming “look! isn’t this *exactly* that vote-trading thing you were talking about?”

On examination, Balanced Rebellion turns out to be the following proposal:

- A Trump supporter in a swing state pairs off with a Hillary supporter in a swing state.
- Both of them vote for Gary Johnson, thereby helping Johnson without giving an advantage to either Hillary or Trump.

So, exercise for the reader: see if you can spot the difference between this idea and the kind of vote-swapping *I’m* talking about. (Here’s a hint: my version helps *prevent* a racist lunatic from taking command of the most powerful military on earth, rather than being neutral about that outcome.)

Not surprisingly, the “balanced rebellion” is advocated by Johnson fans.

So I resolved that, anytime I’d saved up enough errata, I’d do another sackcloth-and-ashes post. Which brings us to today. Without further ado:

**I. Quantum Money Falling Down**

My and Paul Christiano’s explicit public-key quantum money scheme—the one based on low-degree polynomials—has now been fully broken. To clarify, our abstract hidden-subspace scheme—the one that uses a classical black-box to test membership in the subspaces—remains totally fine. Indeed, we unconditionally proved the security of the black-box scheme, and our security proof stands. In the paper, though, we also stuck our necks out further, and conjectured that you could instantiate the black box, by publishing random low-degree polynomials that vanish on the subspaces you want to hide. While I considered this superfluous, at Paul’s insistence, we also recommended adding completely-random “noise polynomials” for extra security.

Our scheme was broken in two stages. First, in 2014, Pena et al. broke the noiseless version of our scheme, using Gröbner-basis methods, over fields of characteristic greater than 2. Over F_{2}—the field we happened to use in our scheme—Pena et al. couldn’t quite prove that their attack worked, but they gave numerical evidence that at least it finds the subspaces in n^{O(log n)} time. Note that nothing in Pena et al.’s attack is specific to quantum money: indeed, their attack consists of a purely classical algorithm, which efficiently solves the general classical problem of recovering large subspaces from polynomials that hide them.

At that point, at least the *noisy* version of our scheme—the one Paul had insisted we include—was still standing! Indeed, the Gröbner-basis attack seemed to break down entirely when some of the polynomials were random garbage.

Later, though, Paul and Or Sattath realized that a quantum trick—basically, the single-copy tomography of Farhi et al.—can identify which polynomials are the noisy ones, provided we’re given a legitimate quantum money state to start with. As a consequence, the problem of breaking the noisy scheme can be reduced to the problem of breaking the noiseless scheme—i.e., the problem that Pena et al. already essentially solved.

As bad as this sounds, it has an interesting positive consequence. In our paper, Paul and I had actually given a security reduction for our money scheme based on low-degree polynomials. In particular, we showed that there’s no polynomial-time quantum algorithm to counterfeit our money states, *unless* there’s a polynomial-time quantum algorithm that finds a basis for a subspace S≤F_{2}^{n} of dimension n/2 with Ω(2^{-n/2}) success probability, given a collection of low-degree polynomials p_{1},…,p_{m} and q_{1},…,q_{m} (m=O(n)) most of which vanish on S and its dual subspace respectively, but that are otherwise random. So, running our reduction backwards, the only possible conclusion from the break is that there *is* such a quantum algorithm! Yet we would’ve had no idea how to find that quantum algorithm without going through quantum money—nor do we know a classical algorithm for the problem, or even a quantum algorithm with Ω(1) success probability.

In the meantime, the problem of designing a public-key quantum money scheme, with good cryptographic evidence for its security, remains open. It’s plausible that there’s some other, more secure way to instantiate my and Paul’s hidden subspace scheme, for example using lattices. And even before we’ve found such a way, we can use indistinguishability obfuscation as a stopgap. We could also seek cryptographic evidence for the security of other kinds of public-key quantum money, like Farhi et al.’s based on knot invariants.

A paper about all this is on our to-do stack. In the meantime, for further details, see Lecture 9 in my Barbados lecture notes.

**II. A De-Merlinization Mistake**

In my 2006 paper QMA/qpoly ⊆ PSPACE/poly: De-Merlinizing Quantum Protocols, the technical core of the complexity result was a new quantum information lemma that I called the “Quantum OR Bound” (Lemma 14 in the paper).

Basically, the Quantum OR Bound says that, if we have an unknown quantum state ρ, as well as a collection of measurements M_{1},…,M_{n} that we might want to make on ρ, then we can distinguish the case that (a) every M_{i} rejects ρ with overwhelming probability, from the case that (b) at least one M_{i} accepts ρ with high probability. And we can do this *despite* having only one copy of ρ, and despite the fact that earlier measurements might corrupt ρ, thereby compromising the later measurements. The intuition is simply that, if the earlier measurements corrupted ρ substantially, that could only be because some of them had a decent probability of accepting ρ, meaning that at any rate, we’re not in case (a).
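Schematically (my paraphrase, with constants suppressed, and writing M_{i} also for the “accept” POVM element of the i-th measurement), the promise is that one of the following holds, and the task is to decide which given just one copy of ρ:

```latex
\text{(a)}\quad \operatorname{Tr}(M_i \rho) \le \varepsilon \ \text{ for every } i
\qquad \text{versus} \qquad
\text{(b)}\quad \operatorname{Tr}(M_i \rho) \ge 1-\delta \ \text{ for some } i.
```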

I’ve since reused the Quantum OR Bound for other problems—most notably, a proof that private-key quantum money requires either a computational assumption or a huge database maintained by the bank (see Theorem 8.3.1 in my Barbados lecture notes).

Alas, Aram Harrow and Ashley Montanaro recently discovered that my proof of the Quantum OR Bound is wrong. It’s wrong because I neglected the possibility of “Zeno-like behavior,” in which repeated measurements on a quantum state would gradually shift the state far away from its starting point, without ever having a significant probability of rejecting the state. For some reason, I assumed without any adequate argument that choosing the measurements at random, rather than in a predetermined order, would solve that problem.

Now, I might actually be *right* that randomizing the measurements is enough to solve the Zeno problem! That remains a plausible conjecture, which Harrow and Montanaro could neither confirm nor refute. In the meantime, though, Harrow and Montanaro were able to recover my QMA/qpoly⊆PSPACE/poly theorem, and all the other conclusions known to follow from the Quantum OR Bound (including some new ones that they discovered), by designing a *new* measurement procedure whose soundness they can prove.

Their new procedure is based on an elegant, obvious-in-retrospect idea that somehow never occurred to me. Namely, instead of just applying M_{i}'s to ρ, one can first put a control qubit into an equal superposition of the |0〉 and |1〉 states, and then apply M_{i}'s *conditioned* on the control qubit being in the |1〉 state. While doing this, one can periodically measure the control qubit in the {|+〉,|-〉} basis, in order to check directly whether applying the M_{i}'s has substantially corrupted ρ. (If it hasn't, one will always get the outcome |+〉; if it has, one might get |-〉.) Substantial corruption, if detected, then tells us that some M_{i}'s must have had non-negligible probabilities of accepting ρ.
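Here’s a tiny numpy sketch of the idea, for a single one-qubit “program” state and a single two-outcome measurement (this is my own toy illustration of the control-qubit trick, not Harrow and Montanaro’s actual procedure):

```python
import numpy as np

def conditional_measure(joint, P):
    """Apply the measurement {P, I-P} to the data qubit, conditioned on
    the control qubit being |1>.  Kraus operators:
        K_yes = |1><1| (x) P
        K_no  = |0><0| (x) I  +  |1><1| (x) (I-P)
    Returns (Pr[accept], normalized post-state given outcome 'no')."""
    d = P.shape[0]
    P0 = np.diag([1.0, 0.0])            # |0><0| on control
    P1 = np.diag([0.0, 1.0])            # |1><1| on control
    K_yes = np.kron(P1, P)
    K_no = np.kron(P0, np.eye(d)) + np.kron(P1, np.eye(d) - P)
    p_yes = np.linalg.norm(K_yes @ joint) ** 2
    post = K_no @ joint
    return p_yes, post / np.linalg.norm(post)

def prob_minus(joint):
    """Probability that measuring the control in the {|+>,|->} basis
    yields |->, i.e. that corruption of the data is detected."""
    bra_minus = np.kron(np.array([1.0, -1.0]) / np.sqrt(2), np.eye(2))
    return np.linalg.norm(bra_minus @ joint) ** 2

plus = np.array([1.0, 1.0]) / np.sqrt(2)
data = np.array([1.0, 0.0])             # data state |0>
joint = np.kron(plus, data)             # control (x) data

# A measurement that never accepts |0>: nothing is disturbed,
# and the control stays |+> (we never see |->).
p, post = conditional_measure(joint, np.diag([0.0, 1.0]))
print(p, prob_minus(post))              # ~0.0, ~0.0

# A measurement that accepts |0> with certainty: conditioned on "no",
# the control has been knocked toward |0>, and |-> shows up.
p, post = conditional_measure(joint, np.diag([1.0, 0.0]))
print(p, prob_minus(post))              # ~0.5, ~0.5
```

So the |-〉 outcome on the control flags exactly the situation where some measurement had a decent chance of accepting—which is the soundness guarantee their procedure needs.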

**III. Almost As Good As True**

One lemma that I’ve used even *more* than the Quantum OR Bound is what I’ve called the “Almost As Good As New Lemma,” and what others in the field have called the “Gentle Measurement Lemma.”

I claimed a proof of the AAGANL in my 2004 paper Limitations of Quantum Advice and One-Way Communication (Lemma 2.2 there), and have used the lemma in like half a dozen later papers. Alas, when I lectured at Barbados, Sasha Razborov and others discovered that my proof of the AAGANL was missing a crucial step! More concretely, the proof I gave there works for pure states but not for mixed states. For mixed states, the trouble is that I take a purification of the mixed state—something that always exists mathematically—but then illegally assume that the measurement I’m analyzing acts on the particular purification I’ve conjured up.

Fortunately, one can easily fix this problem by decomposing the state ρ into a mixture of pure states, then applying my earlier argument to each pure state separately, and finally using Cauchy-Schwarz (or just the convexity of the square-root function) to recombine the results. Moreover, this is exactly what other people’s proofs of the Gentle Measurement Lemma *did* do, though I’d never noticed it before Barbados—I just idly wondered why those other proofs took twice as long as mine to do the same work! For a correct proof, see Lemma 1.3.1 in the Barbados lecture notes.
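For reference, here’s the statement in one common form (the exact constant varies across the literature; this is the version I’d write down from memory, not necessarily the precise one in the lecture notes):

```latex
\text{If a two-outcome measurement accepts } \rho \text{ with probability at least } 1-\varepsilon,
\text{ then the post-measurement state } \tilde\rho \text{ (conditioned on acceptance) satisfies }
\|\rho - \tilde\rho\|_{\mathrm{tr}} \le 2\sqrt{\varepsilon}.
```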

**IV. Oracle Woes**

In my 2010 paper BQP and the Polynomial Hierarchy, I claimed to construct oracles A relative to which BQP⊄BPP_{path} and BQP⊄SZK, even while making only partial progress toward the big prize, which would’ve been an oracle relative to which BQP⊄PH. Not only that: I claimed to show that *any* problem with a property called “almost k-wise independence”—one example being the Forrelation (or Fourier Checking) problem that I introduced in that paper—was neither in BPP_{path} nor in SZK. But I showed that Forrelation *is* in BQP, thus yielding the separations.

Alas, this past spring Lijie Chen, who was my superb visiting student from Tsinghua University, realized that my proofs of these particular separations were wrong. Not only that, they were wrong *because I implicitly substituted a ratio of expectations for an expectation of ratios* (!). Again, it might still be *true* that almost k-wise independent problems can be neither in BPP_{path} nor in SZK: that remains an interesting conjecture, which Lijie was unable to resolve one way or the other. (On the other hand, I showed here that almost k-wise independent problems *can* be in PH.)

But never fear! In a recent arXiv preprint, Lijie has supplied correct proofs for the BQP⊄BPP_{path} and BQP⊄SZK oracle separations—using the same Forrelation problem that I studied, but exploiting additional properties of Forrelation beyond its almost k-wise independence. Lijie notes that my proofs, had they worked, would also have yielded an oracle relative to which BQP⊄AM, which would’ve been a spectacular result and nontrivial progress toward BQP⊄PH. His proofs, by contrast, apply only to worst-case decision problems rather than problems of distinguishing two probability distributions, and therefore don’t imply anything about BQP vs. AM. Anyway, there’s other cool stuff in his paper too.

**V. We Needed More Coffee**

This is one I’ve already written about on this blog, but just in case anyone missed it … in my, Sean Carroll’s, and Lauren Ouellette’s original draft paper on the coffee automaton, the specific rule we discuss *doesn’t* generate any significant amount of complexity (in the sense of coarse-grained entropy). We wrongly thought it did, because of a misinterpretation of our simulation data. But as Brent Werness brought to our attention, not only does a corrected simulation not show any complexity bump, one can rigorously *prove* there’s no complexity bump. And we could’ve realized all this from the beginning, by reflecting that pure random diffusion (e.g., what cream does in coffee when you don’t stir it with a spoon) *doesn’t* actually produce interesting tendril patterns.

On the other hand, Brent proposed a different rule—one that involves “shearing” whole regions of cream and coffee across each other—that *does* generate significant complexity, basically because of all the long-range correlations it induces. And not only do we clearly see this in simulations, but the growth of complexity can be rigorously proven! Anyway, we have a long-delayed revision of the paper that will explain all this in more detail, with Brent as well as MIT student Varun Mohan now added as coauthors.

If any of my colleagues feel inspired to write up their own “litanies of mathematical error,” they’re welcome to do so in the comments! Just remember: you don’t earn any epistemic virtue points unless the errors you reveal *actually* embarrass you. No humblebragging about how you once left out a minus sign in your paper that won the Fields Medal.

You can see the paper right here (“Synthetic recombinase-based state machines in living cells,” by Nathaniel Roquet, Ava P. Soleimany, Alyssa C. Ferris, Scott Aaronson, and Timothy K. Lu). [**Update (Aug. 3):** The previous link takes you to a paywall, but you can now access the full text of our paper here. See also the Supplementary Material here.] You can also read the *MIT News* article (“Scientists program cells to remember and respond to series of stimuli”). In any case, *my* little part of the paper will be fully explained in this post.

A little over a year ago, two MIT synthetic biologists—Timothy Lu and his PhD student Nate Roquet—came to my office saying they had a problem they wanted help with. *Why me?* I wondered. Didn’t they realize I was a quantum complexity theorist, who so hated picking apart owl pellets and memorizing the names of cell parts in junior-high Life Science that he’s avoided taking a single biology course ever since? (Not counting computational biology, taught in a CS department by Richard Karp.)

Nevertheless, I listened to my biologist guests—which turned out to be an excellent decision.

Tim and Nate told me about a DNA system with surprisingly clear rules, which led them to a strange but elegant combinatorial problem. In this post, first I need to spend some time to tell you the rules; then I can tell you the problem, and lastly its solution. There are no mathematical prerequisites for this post, and *certainly* no biology prerequisites: everything will be completely elementary, like learning a card game. Pen and paper might be helpful, though.

As we all learn in kindergarten, DNA is a finite string over the 4-symbol alphabet {A,C,G,T}. We’ll find it more useful, though, to think in terms of entire *chunks* of DNA bases, which we’ll label arbitrarily with letters like X, Y, and Z. For example, we might have X=ACT, Y=TAG, and Z=GATTACA.

We can also *invert* one of these chunks, which means writing it backwards while also swapping the A’s with T’s and the G’s with C’s. We’ll denote this operation by * (the technical name in biology is “reverse-complement”). For example:

X*=AGT, Y*=CTA, Z*=TGTAATC.

Note that (X*)*=X.
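Since the chunks are short, it’s easy to sanity-check these reverse-complements mechanically. Here’s a minimal Python helper (the name `rc` is my own, not anything from the paper):

```python
# Reverse-complement: write the chunk backwards, swapping A<->T and G<->C.
COMP = str.maketrans("ACGT", "TGCA")

def rc(chunk: str) -> str:
    """The * operation on a chunk of DNA bases."""
    return chunk[::-1].translate(COMP)

print(rc("ACT"), rc("TAG"), rc("GATTACA"))  # AGT CTA TGTAATC
```

Applying `rc` twice returns the original chunk, matching (X*)*=X.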

We can then combine our chunks and their inverses into a longer DNA string, like so:

ZYX*Y* = GATTACA TAG AGT CTA.

From now on, we’ll work exclusively with the chunks, and forget completely about the underlying A’s, C’s, G’s, and T’s.

Now, there are also certain special chunks of DNA bases, called *recognition sites*, which tell the little machines that read the DNA when they should start doing something and when they should stop. Recognition sites come in pairs, so we’ll label them using various parenthesis symbols like ( ), [ ], { }. To convert a parenthesis into its partner, you invert it: thus ( = )*, [ = ]*, { = }*, etc. Crucially, the parentheses in a DNA string don’t need to “face the right ways” relative to each other, and they also don’t need to nest properly. Thus, both of the following are valid DNA strings:

X ( Y [ Z [ U ) V

X { Y ] Z { U [ V

Let’s refer to X, Y, Z, etc.—the chunks that aren’t recognition sites—as *letter-chunks*. Then it will be convenient to make the following simplifying assumptions:

- Our DNA string consists of an alternating sequence of recognition sites and letter-chunks, beginning and ending with letter-chunks. (If this weren’t true, then we could just glom together adjacent recognition sites and adjacent letter-chunks, and/or add new dummy chunks, until it *was* true.)
- Every letter-chunk that appears in the DNA string appears exactly once (either inverted or not), while every recognition site that appears, appears exactly twice. Thus, if there are n distinct recognition sites, there are 2n+1 distinct letter-chunks.
- Our DNA string can be decomposed into its constituent chunks *uniquely*—i.e., it’s always possible to tell which chunk we’re dealing with, and when one chunk stops and the next one starts. In particular, the chunks and their reverse-complements are all distinct strings.

The little machines that read the DNA string are called *recombinases*. There’s one kind of recombinase for each kind of recognition site: a (-recombinase, a [-recombinase, and so on. When, let’s say, we let a (-recombinase loose on our DNA string, it searches for (‘s and )’s and ignores everything else. Here’s what it does:

- If there are no (‘s or )’s in the string, or only one of them, it does nothing.
- If there are two (‘s facing the same way—like ( ( or ) )—it deletes everything in between them, including the (‘s themselves.
- If there are two (‘s facing opposite ways—like ( ) or ) (—it deletes the (‘s, and inverts everything in between them.
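These three rules are simple enough to simulate directly. Here’s a minimal chunk-level sketch in Python; the token representation (strings like `"A"`, `"A*"`, `"("`) and the function names are my own choices, not anything from the paper:

```python
# A DNA string is a list of chunk tokens, e.g. "A ( B [ C [ D ) E".split().
# Letter-chunks carry a trailing "*" when inverted; recognition sites are
# the parenthesis characters themselves.
PARTNER = {"(": ")", ")": "(", "[": "]", "]": "[",
           "{": "}", "}": "{", "<": ">", ">": "<"}

def star(tok):
    """Reverse-complement one chunk: partner a recognition site, toggle * on a letter."""
    if tok in PARTNER:
        return PARTNER[tok]
    return tok[:-1] if tok.endswith("*") else tok + "*"

def apply_rec(s, site):
    """Apply the recombinase for the site/PARTNER[site] pair to a chunk list."""
    idx = [i for i, t in enumerate(s) if t in (site, PARTNER[site])]
    if len(idx) < 2:
        return list(s)                 # rule 1: fewer than two sites, do nothing
    i, j = idx[0], idx[1]
    if s[i] == s[j]:                   # rule 2: same facing, delete (sites included)
        return s[:i] + s[j + 1:]
    middle = [star(t) for t in reversed(s[i + 1:j])]
    return s[:i] + middle + s[j + 1:]  # rule 3: opposite facing, invert the middle
```

Running it on the first example below reproduces the transformations by hand: applying [-recombinase to A ( B [ C [ D ) E gives A ( B D ) E, and applying (-recombinase gives A D* ] C* ] B* E.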

Let’s see some examples. When we apply [-recombinase to the string

A ( B [ C [ D ) E,

we get

A ( B D ) E.

When we apply (-recombinase to the same string, we get

A D* ] C* ] B* E.

When we apply *both* recombinases (in either order), we get

A D* B* E.

Another example: when we apply {-recombinase to

A { B ] C { D [ E,

we get

A D [ E.

When we apply [-recombinase to the same string, we get

A { B D* } C* E.

When we apply both recombinases—ah, but here the order matters! If we apply { first and then [, we get

A D [ E,

since the [-recombinase now encounters only a single [, and has nothing to do. On the other hand, if we apply [ first and then {, we get

A D B* C* E.

Notice that inverting a substring can change the relative orientation of two recognition sites—e.g., it can change { { into { } or vice versa. It can thereby change what happens (inversion or deletion) when some future recombinase is applied.

One final rule: after we’re done applying recombinases, we remove the remaining recognition sites like so much scaffolding, leaving only the letter-chunks. Thus, the final output

A D [ E

becomes simply A D E, and so on. Notice also that, if we happen to delete one recognition site of a given type while leaving its partner, the remaining site will *necessarily* just bounce around inertly before getting deleted at the end—so we might as well “put it out of its misery,” and delete it right away.
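Putting the pieces together (apply a sequence of recombinases, then strip the leftover recognition sites like scaffolding) makes the order-dependence above easy to replay. This sketch restates the chunk-level simulator so it runs on its own; the representation and names are again my own:

```python
PARTNER = {"(": ")", ")": "(", "[": "]", "]": "[", "{": "}", "}": "{"}

def star(tok):  # reverse-complement one chunk
    return PARTNER[tok] if tok in PARTNER else (tok[:-1] if tok.endswith("*") else tok + "*")

def apply_rec(s, site):  # one recombinase step, following the three rules above
    idx = [i for i, t in enumerate(s) if t in (site, PARTNER[site])]
    if len(idx) < 2:
        return list(s)
    i, j = idx[0], idx[1]
    if s[i] == s[j]:
        return s[:i] + s[j + 1:]
    return s[:i] + [star(t) for t in reversed(s[i + 1:j])] + s[j + 1:]

def run(s, seq):
    """Apply recombinases in order, then remove leftover recognition sites."""
    for site in seq:
        s = apply_rec(s, site)
    return [t for t in s if t not in PARTNER]

s = "A { B ] C { D [ E".split()
print(run(s, "{["))  # ['A', 'D', 'E']: { first, then [ finds only one site
print(run(s, "[{"))  # ['A', 'D', 'B*', 'C*', 'E']: the order matters!
```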

My coauthors have actually implemented all of this in a wet lab, which is what most of the *Science* paper is about (my part is mostly in a technical appendix). They think of what they’re doing as building a “biological state machine,” which could have applications (for example) to programming cells for medical purposes.

But without further ado, let me tell you the math question they gave me. For reasons that they can explain better than I can, my coauthors were interested in the *information storage capacity* of their biological state machine. That is, they wanted to know the answer to the following:

Suppose we have a fixed initial DNA string, with n pairs of recognition sites and 2n+1 letter-chunks; and we also have a recombinase for each type of recognition site. Then by choosing which recombinases to apply, as well as which order to apply them in, how many different DNA strings can we generate as output?

It’s easy to construct an example where the answer is as large as 2^{n}. Thus, if we consider a starting string like

A ( B ) C [ D ] E { F } G < H > I,

we can clearly make 2^{4}=16 different output strings by choosing which subset of recombinases to apply and which not. For example, applying [, {, and < (in any order) yields

A B C D* E F* G H* I.

There are also cases where the number of distinct outputs is less than 2^{n}. For example,

A ( B [ C [ D ( E

can produce only 3 outputs—A B C D E, A B D E, and A E—rather than 4.

What Tim and Nate wanted to know was: can the number of distinct outputs ever be *greater* than 2^{n}?

Intuitively, it seems like the answer “has to be” yes. After all, we already saw that the order in which recombinases are applied can matter enormously. And given n recombinases, the number of possible permutations of them is n!, not 2^{n}. (Furthermore, if we remember that any *subset* of the recombinases can be applied in any order, the number of possibilities is even a bit greater—about e·n!.)

Despite this, when my coauthors played around with examples, they found that the number of distinct output strings never exceeded 2^{n}. In other words, the number of output strings behaved *as if* the order didn’t matter, even though it does. The problem they gave me was either to explain this pattern or to find a counterexample.
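The pattern is easy to check by brute force on small examples: enumerate every subset of recombinases in every order, and count the distinct outputs. A self-contained sketch (my own representation and names):

```python
from itertools import permutations

PARTNER = {"(": ")", ")": "(", "[": "]", "]": "[",
           "{": "}", "}": "{", "<": ">", ">": "<"}

def star(tok):
    return PARTNER[tok] if tok in PARTNER else (tok[:-1] if tok.endswith("*") else tok + "*")

def apply_rec(s, site):
    idx = [i for i, t in enumerate(s) if t in (site, PARTNER[site])]
    if len(idx) < 2:
        return list(s)
    i, j = idx[0], idx[1]
    if s[i] == s[j]:
        return s[:i] + s[j + 1:]
    return s[:i] + [star(t) for t in reversed(s[i + 1:j])] + s[j + 1:]

def run(s, seq):
    for site in seq:
        s = apply_rec(s, site)
    return tuple(t for t in s if t not in PARTNER)

def all_outputs(s, sites):
    """Distinct outputs over every subset of recombinases, in every order."""
    return {run(s, order)
            for k in range(len(sites) + 1)
            for order in permutations(sites, k)}

print(len(all_outputs("A ( B [ C [ D ( E".split(), "([")))                     # 3
print(len(all_outputs("A ( B ) C [ D ] E { F } G < H > I".split(), "([{<")))   # 16
```

Both examples from the text check out: 3 outputs for the first string, and exactly 2^{4}=16 for the second.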

I found that the pattern holds:

**Theorem:** Given an initial DNA string with n pairs of recognition sites, we can generate at most 2^{n} distinct output strings by choosing which recombinases to apply and in which order.

Let a *recombinase sequence* be an ordered list of recombinases, each occurring at most once: for example, ([{ means to apply (-recombinase, then [-recombinase, then {-recombinase.

The proof of the theorem hinges on one main definition. Given a recombinase sequence that acts on a given DNA string, let’s call the sequence *irreducible* if every recombinase in the sequence actually finds two recognition sites (and hence, inverts or deletes a nonempty substring) when it’s applied. Let’s call the sequence *reducible* otherwise. For example, given

A { B ] C { D [ E,

the sequence [{ is irreducible, but {[ is reducible, since the [-recombinase does nothing.

Clearly, for every reducible sequence, there’s a shorter sequence that produces the same output string: just omit the recombinases that don’t do anything! (On the other hand, I leave it as an exercise to show that the converse is false. That is, even if a sequence is *ir*reducible, there might be a shorter sequence that produces the same output string.)
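In the chunk-level model, irreducibility is easy to test directly: apply the sequence step by step, and flag any recombinase that does nothing. A self-contained sketch (names and representation are mine):

```python
PARTNER = {"(": ")", ")": "(", "[": "]", "]": "[", "{": "}", "}": "{"}

def star(tok):
    return PARTNER[tok] if tok in PARTNER else (tok[:-1] if tok.endswith("*") else tok + "*")

def apply_rec(s, site):
    idx = [i for i, t in enumerate(s) if t in (site, PARTNER[site])]
    if len(idx) < 2:
        return list(s)
    i, j = idx[0], idx[1]
    if s[i] == s[j]:
        return s[:i] + s[j + 1:]
    return s[:i] + [star(t) for t in reversed(s[i + 1:j])] + s[j + 1:]

def is_irreducible(s, seq):
    """True iff every recombinase in seq finds both its sites when its turn comes."""
    s = list(s)
    for site in seq:
        nxt = apply_rec(s, site)
        if nxt == s:          # this recombinase did nothing: reducible
            return False
        s = nxt
    return True

s = "A { B ] C { D [ E".split()
print(is_irreducible(s, "[{"), is_irreducible(s, "{["))  # True False
```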

**Key Lemma:** Given an initial DNA string, and given a subset of k recombinases, every irreducible sequence composed of all k of those recombinases produces the same output string.

Assuming the Key Lemma, let’s see why the theorem follows. Given an initial DNA string, suppose you want to specify one of its possible output strings. I claim you can do this using only n bits of information. For you just need to specify which subset of the n recombinases you want to apply, in *some* irreducible order. Since every irreducible sequence of those recombinases leads to the same output, you don’t need to specify an order on the subset. Furthermore, for each possible output string S, there must be *some* irreducible sequence that leads to S—given a reducible sequence for S, just keep deleting irrelevant recombinases until no more are left—and therefore some subset of recombinases you could pick that uniquely determines S. OK, but if you can specify each S uniquely using n bits, then there are at most 2^{n} possible S’s.

**Proof of Key Lemma.** Given an initial DNA string, let’s assume for simplicity that we’re going to apply all n of the recombinases, in some irreducible order. We claim that the final output string doesn’t depend at all on *which* irreducible order we pick.

If we can prove this claim, then the lemma follows, since given a proper subset of the recombinases, say of size k<n, we can simply glom together everything between one relevant recognition site and the next one, treating them as 2k+1 giant letter-chunks, and then repeat the argument.

Now to prove the claim. Given two letter-chunks—say A and B—let’s call them *soulmates* if either A and B or A* and B* will necessarily end up next to each other, whenever all n recombinases are applied in some irreducible order, and whenever A or B appears at all in the output string. Also, let’s call them *anti-soulmates* if either A and B* or A* and B will necessarily end up next to each other if either appears at all.

To illustrate, given the initial DNA sequence,

A [ B ( C ] D ( E,

you can check that A and C are anti-soulmates. Why? Because if we apply all the recombinases in an irreducible sequence, then at some point, the [-recombinase needs to get applied, and it needs to find both [ recognition sites. And one of these recognition sites will still be next to A, and the other will still be next to C (for what could have pried them apart? nothing). And when that happens, no matter where C has traveled in the interim, C* must get brought next to A. If the [-recombinase does an inversion, the transformation will look like

A [ … C ] → A C* …,

while if it does a deletion, the transformation will look like

A [ … [ C* → A C*.

Note that C’s [ recognition site will be to its left, if and only if C has been flipped to C*. In this particular example, A never moves, but if it did, we could repeat the analysis for A and *its* [ recognition site. The conclusion would be the same: no matter what inversions or deletions we do first, we’ll maintain the invariant that A and C* (or A* and C) will immediately jump next to each other, as soon as the [ recombinase is applied. And once they’re next to each other, nothing will ever separate them.

Similarly, you can check that C and D are soulmates, connected by the ( recognition sites; D and B are anti-soulmates, connected by the [ sites; and B and E are soulmates, connected by the ( sites.

More generally, let’s consider an arbitrary DNA sequence, with n pairs of recognition sites. Then we can define a graph, called the *soulmate graph*, where the 2n+1 letter-chunks are the vertices, and where X and Y are connected by (say) a blue edge if they’re soulmates, and by a red edge if they’re anti-soulmates.

When we construct this graph, we find that every vertex has exactly 2 neighbors, one for each recognition site that borders it—save the first and last vertices, which border only one recognition site each and so have only one neighbor each. But these facts immediately determine the structure of the graph. Namely, it must consist of a simple *path*, starting at the first letter-chunk and ending at the last one, together with possibly a disjoint union of cycles.

But we know that the first and last letter-chunks can never move anywhere. For that reason, a path of soulmates and anti-soulmates, starting at the first letter-chunk and ending at the last one, *uniquely determines* the final output string, when the n recombinases are applied in any irreducible order. We just follow it along, switching between inverted and non-inverted letter-chunks whenever we encounter a red edge. The cycles contain the letter-chunks that necessarily get deleted along the way to that unique output string. This completes the proof of the lemma, and hence the theorem.
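The Key Lemma can also be spot-checked numerically: for a fixed string, every irreducible ordering of the full set of recombinases should yield one and the same output. A self-contained sketch; the example string here is my own, chosen so that both orderings of the two recombinases are irreducible:

```python
from itertools import permutations

PARTNER = {"(": ")", ")": "(", "[": "]", "]": "["}

def star(tok):
    return PARTNER[tok] if tok in PARTNER else (tok[:-1] if tok.endswith("*") else tok + "*")

def apply_rec(s, site):
    idx = [i for i, t in enumerate(s) if t in (site, PARTNER[site])]
    if len(idx) < 2:
        return list(s)
    i, j = idx[0], idx[1]
    if s[i] == s[j]:
        return s[:i] + s[j + 1:]
    return s[:i] + [star(t) for t in reversed(s[i + 1:j])] + s[j + 1:]

def irreducible_outputs(s, sites):
    """Outputs of every irreducible ordering of the full recombinase set."""
    outs = set()
    for order in permutations(sites):
        cur, ok = list(s), True
        for site in order:
            nxt = apply_rec(cur, site)
            ok = ok and nxt != cur      # each step must actually act
            cur = nxt
        if ok:
            outs.add(tuple(t for t in cur if t not in PARTNER))
    return outs

# Both orderings of ( and [ are irreducible here, and they agree:
print(irreducible_outputs("A ( B [ C ) D ] E".split(), "(["))  # {('A', 'C*', 'E')}
```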

There are other results in the paper, like a generalization to the case where there can be k pairs of recognition sites of each type, rather than only one. In that case, we can prove that the number of distinct output strings is at most 2^{kn}, and that it *can* be as large as ~(2k/3e)^{n}. We don’t know the truth between those two bounds.

Why is this interesting? As I said, my coauthors had their own reasons to care, involving the number of bits one can store using a certain kind of DNA state machine. I got interested for a different reason: because this is a case where biology threw up a bunch of rules that *look* like a random mess—the parentheses don’t even need to nest correctly? inversion can *also* change the semantics of the recognition sites? evolution never thought about what happens if you delete one recognition site while leaving the other one?—and yet, on analysis, all the rules work in perfect harmony to produce a certain outcome. Change a single one of them, and the “at most 2^{n} distinct DNA sequences” theorem would be false. Mind you, I’m still not sure what biological *purpose* it serves for the rules to work in harmony this way, but they do.

But the point goes further. While working on this problem, I’d repeatedly encounter an aspect of the mathematical model that seemed weird and inexplicable—only to have Tim and Nate explain that the aspect made sense once you brought in additional facts from biology, facts not in the model they gave me. As an example, we saw that in the soulmate graph, the deleted substrings appear as cycles. But surely excised DNA fragments don’t literally form loops? Why yes, apparently, they do. As a second example, consider the DNA string

A ( B [ C ( D [ E.

When we construct the soulmate graph for this string, we get the path

A–D–C–B–E.

Yet there’s no actual recombinase sequence that leads to A D C B E as an output string! Thus, we see that it’s possible to have a “phantom output,” which the soulmate graph suggests should be reachable but that *isn’t* actually reachable. According to my coauthors, that’s because the “phantom outputs” *are* reachable, once you know that in real biology (as opposed to the mathematical model), excised DNA fragments can also reintegrate back into the long DNA string.
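The phantom output is easy to confirm within the mathematical model: brute-force enumeration over A ( B [ C ( D [ E never produces A D C B E. A self-contained sketch (representation and names are mine):

```python
from itertools import permutations

PARTNER = {"(": ")", ")": "(", "[": "]", "]": "["}

def star(tok):
    return PARTNER[tok] if tok in PARTNER else (tok[:-1] if tok.endswith("*") else tok + "*")

def apply_rec(s, site):
    idx = [i for i, t in enumerate(s) if t in (site, PARTNER[site])]
    if len(idx) < 2:
        return list(s)
    i, j = idx[0], idx[1]
    if s[i] == s[j]:
        return s[:i] + s[j + 1:]
    return s[:i] + [star(t) for t in reversed(s[i + 1:j])] + s[j + 1:]

def reachable(s, sites):
    """All outputs over every subset of recombinases, applied in every order."""
    outs = set()
    for k in range(len(sites) + 1):
        for order in permutations(sites, k):
            cur = list(s)
            for site in order:
                cur = apply_rec(cur, site)
            outs.add(tuple(t for t in cur if t not in PARTNER))
    return outs

outs = reachable("A ( B [ C ( D [ E".split(), "([")
print(("A", "D", "C", "B", "E") in outs)  # False: the soulmate path isn't reachable
```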

Many of my favorite open problems about this model concern algorithms and complexity. For example: given as input an initial DNA string, does there *exist* an irreducible order in which the recombinases can be applied? Is the “utopian string”—the string suggested by the soulmate graph—actually reachable? If it *is* reachable, then what’s the shortest sequence of recombinases that reaches it? Are these problems solvable in polynomial time? Are they NP-hard? More broadly, if we consider all the subsets of recombinases that can be applied in an irreducible order, or all the irreducible orders themselves, what combinatorial conditions do they satisfy? I don’t know—if you’d like to take a stab, feel free to share what you find in the comments!

What I do know is this: I’m fortunate that, before publishing your first biology paper, the editors at *Science* don’t call up your 7^{th}-grade Life Science teacher to ask how you did in the owl pellet unit.

**More in the comments:**

- Some notes on the generalization to k pairs of recognition sites of each type
- My coauthor Nathaniel Roquet’s comments on the biology

**Unrelated Announcement from My Friend Julia Wise (July 24):** Do you like science and care about humanity’s positive trajectory? July 25 is the final day to apply for Effective Altruism Global 2016. From August 5-7 at UC Berkeley, a network of founders, academics, policy-makers, and more will gather to apply economic and scientific thinking to the world’s most important problems. Last year featured Elon Musk and the head of Google.org. This year will be headlined by Cass Sunstein, the co-author of Nudge. If you apply with this link, the organizers will give you a free copy of Doing Good Better by Will MacAskill. Scholarships are available for those who can’t afford the cost. Read more here. Apply here.

Here’s the summary:

This mini-course will introduce participants to an exciting frontier for quantum computing theory: namely, questions involving the computational complexity of preparing a certain quantum state or applying a certain unitary transformation. Traditionally, such questions were considered in the context of the Nonabelian Hidden Subgroup Problem and quantum interactive proof systems, but they are much broader than that. One important application is the problem of “public-key quantum money” – that is, quantum states that can be authenticated by anyone, but only created or copied by a central bank – as well as related problems such as copy-protected quantum software. A second, very recent application involves the black-hole information paradox, where physicists realized that for certain conceptual puzzles in quantum gravity, they needed to know whether certain states and operations had exponential quantum circuit complexity. These two applications (quantum money and quantum gravity) even turn out to have connections to each other! A recurring theme of the course will be the quest to relate these novel problems to more traditional computational problems, so that one can say, for example, “this quantum money is hard to counterfeit if that cryptosystem is secure,” or “this state is hard to prepare if PSPACE is not in PP/poly.” Numerous open problems and research directions will be suggested, many requiring only minimal quantum background. Some previous exposure to quantum computing and information will be assumed, but a brief review will be provided.

If you still haven’t decided whether to tackle this thing: it’s basically a quantum complexity theory textbook (well, a textbook for certain *themes* within quantum complexity theory) that I’ve written and put on the Internet for free. It has explanations of lots of published results both old and new, but also some results of mine (e.g., about private-key quantum money, firewalls, and AdS/CFT) that I shamefully haven’t yet written up as papers, and that therefore aren’t currently available anywhere else. If you’re interested in certain specific topics—for example, only quantum money, or only firewalls—you should be able to skip around in the notes without too much difficulty.

Thanks *so much* to Denis Therien for organizing the mini-course, Anil Ada for managing the scribe notes effort, my PhD students Adam Bouland and Luke Schaeffer for their special guest lecture (the last one), and finally, the course attendees for their constant questions and interruptions, and (of course) for scribing.

And in case you were wondering: yes, I’ll do absolutely *anything* for science, even if it means teaching a weeklong course in Barbados! Lest you consider this a pure island boondoggle, please know that I spent probably 12-14 hours per day either lecturing (in two 3-hour installments) or preparing for the lectures, with little sleep and just occasional dips in the ocean.

And now I’m headed to the Perimeter Institute for their It from Qubit summer school, not at all unrelated to my Barbados lectures. This time, though, it’s thankfully other people’s turns to lecture…

Yes, that’s correct: the Call for Papers for the 2017 Innovations in Theoretical Computer Science (ITCS) conference, to be held in Berkeley this coming January 9-11, is finally up. I attended ITCS’2015 in Rehovot, Israel and had a blast, and will attend ITCS’2017 if logistics permit.

But that’s not all: in a *Shtetl-Optimized* exclusive, the legendary Christos Papadimitriou, coauthor of the acclaimed Logicomix and ITCS’2017 program chair, has written us a guest post about what makes ITCS special and why you should come. Enjoy! –SA

**ITCS: A hidden treasure of TCS**

by Christos Papadimitriou

Conferences, for me, are a bit like demonstrations: they were fun in the 1970s. There was the Hershey STOC, of course, and that great FOCS in Providence, plus a memorable database theory gathering in Calabria. Ah, children, you should have been there…

So, even though I was a loyal supporter of the ITCS idea from the beginning – the “I”, you recall, stands for *innovation* –, I managed to miss essentially all of them – except for those that left me no excuse. For example, this year the program committee was unreasonably kind to my submissions, and so this January I was in Boston to attend.

I want to tell you about ITCS 2016, because it was a gas.

First, I saw all the talks. I cannot recall this ever happening to me before. I reconnected with fields of old, learned a ton, and got a few cool new ideas.

In fact, I believe that there was no talk with fewer than 60 people in the audience – and that’s about 70% of the attendees. In most talks it was closer to 90%. When was the last conference where you saw that?

And what is the secret of this enhanced audience attention? One explanation is that smaller conference means small auditorium. Listening to the talk no longer feels like watching a concert in a stadium, or an event on TV, it’s more like a story related by a friend. Another gimmick that works well is that, at ITCS, session chairs start the session with a 10-minute “rant,” providing context and their own view of the papers in the session.

Our field got a fresh breath of cohesion at ITCS 2016: cryptographers listened to game theorists in the presence of folks who do data structures for a living, or circuit complexity – for a moment there, the seventies were back.

Ah, those papers, their cleverness and diversity and freshness! Here is a sample of a few with a brief comment for each (take a look at the conference website for the papers and the presentations).

- What is keeping quantum computers from conquering all of NP? It is the problem with destructive measurements, right? Think again, say Aaronson, Bouland and Fitzsimons. In their paper (pdf, slides) they consider several deviations from current restrictions, including non-destructive measurements, and the space ‘just above’ BQP turns out to be a fascinating and complex place.

- Roei Tell (pdf, slides) asks another unexpected question: when is an object far from being far from having a property? On the way to an answer he discovers a rich and productive duality theory of property testing, as well as a very precise and sophisticated framework in which to explore it.

- If you want to represent the permanent of a matrix as the determinant of another matrix of linear forms in the entries, how large must this second matrix be? – an old question by Les Valiant. The innovation by Landsberg and Ressayre (pdf, slides) is that they make fantastic progress in this important problem through geometric complexity: If certain natural symmetries are to be satisfied, the answer is exponential!

(A parenthesis: The last two papers make the following important point clear: *Innovation* in ITCS is *not* meant to be the antithesis of mathematical sophistication. Deep math and methodological innovation are key ingredients of the ITCS culture.)

- When shall we find an explicit function requiring more than 3*n* gates? In their brave exploration of new territory for circuit complexity, Golovnev and Kulikov (pdf, slides) find one possible answer: “as soon as we have explicit dispersers for quadratic varieties.”

- The student paper award went to Aviad Rubinstein for his work (pdf) on auctioning multiple items – the hardest nut in algorithmic mechanism design. He gives a PTAS for optimizing over a large – and widely used – class of “partitioning” heuristics.

Even though there were no lively discussions in the lobby during the sessions – too many folks attending, see? – the interaction was intense and enjoyable during the extra long breaks and the social events.

Plus we had the Graduating Bits night, when the youngest among us get 5 minutes to tell their story. I would have traveled to Cambridge just for that!

All said, ITCS 2016 was a gem of a meeting. If you skipped it, you really missed a good one.

But there is no reason to miss **ITCS 2017**, let me tell you a few things about it:

- It will be in **Berkeley**, **January 9-11, 2017**, the week before the Barcelona SODA.

- It will take place at the Simons Institute just a few days before the boot camps on Pseudorandomness and Learning.

- I volunteered to be **program chair**, and the steering committee has decided to try a few innovations in the submission process:

- **Submission deadline is mid-September**, so you have a few more weeks to collect your most innovative thoughts. Notification before the STOC deadline.

- Authors will post a copy of their paper, and **will submit to the committee a statement about it**, say 1000 words max. Think of it as your chance to write a favorable referee report for your own paper, telling the committee why you think it is interesting and innovative! If you feel this is self-evident, just tell us that.

- The committee members will be the judges of the overall worth and innovative nature of the paper. Sub-reviewers are optional, and their opinion is not communicated to the rest of the committee.

- The committee may invite speakers to present specific recent interesting work. Submitted papers especially liked by the committee may be elevated to “invited.”

- Plus Graduating Bits, chair rants, social program, not to mention the Simons Institute auditorium and Berkeley in January.

You should come!
