## Archive for the ‘Quantum’ Category

### Chinese BosonSampling experiment: the gloves are off

Wednesday, December 16th, 2020

Two weeks ago, I blogged about the striking claim, by the group headed by Chaoyang Lu and Jianwei Pan at USTC in China, to have achieved quantum supremacy via BosonSampling with 50-70 detected photons. I also did a four-part interview on the subject with Jonathan Tennenbaum at Asia Times, and other interviews elsewhere. None of that stopped some people, who I guess didn’t google, from writing to tell me how disappointed they were by my silence!

The reality, though, is that a lot has happened since the original announcement, so it’s way past time for an update.

I. The Quest to Spoof

Most importantly, other groups almost immediately went to work trying to refute the quantum supremacy claim, by finding some efficient classical algorithm to spoof the reported results. It’s important to understand that this is exactly how the process is supposed to work: as I’ve often stressed, a quantum supremacy claim is credible only if it’s open to the community to refute and if no one can. It’s also important to understand that, for reasons we’ll go into, there’s a decent chance that people will succeed in simulating the new experiment classically, although they haven’t yet. All parties to the discussion agree that the new experiment is, far and away, the closest any BosonSampling experiment has ever gotten to the quantum supremacy regime; the hard part is to figure out if it’s already there.

Part of me feels guilty that, as one of the reviewers of the Science paper—albeit, one stressed and harried by kids and covid—it’s now clear that I didn’t exercise the amount of diligence that I could have, in searching for ways to kill the new supremacy claim. But another part of me feels that, with quantum supremacy claims, much like with proposals for new cryptographic codes, vetting can’t be the responsibility of one or two reviewers. Instead, provided the claim is serious—as this one obviously is—the only thing to do is to get the paper out, so that the entire community can then work to knock it down. Communication between authors and skeptics is also a hell of a lot faster when it doesn’t need to go through a journal’s editorial system.

Not surprisingly, one skeptic of the new quantum supremacy claim is Gil Kalai, who (despite Google’s result last year, which Gil still believes must be in error) rejects the entire possibility of quantum supremacy on quasi-metaphysical grounds. But other skeptics are current and former members of the Google team, including Sergio Boixo and John Martinis! And—pause to enjoy the irony—Gil has effectively teamed up with the Google folks on questioning the new claim. Another central figure in the vetting effort—one from whom I’ve learned much of what I know about the relevant issues over the last week—is Dutch quantum optics professor and frequent Shtetl-Optimized commenter Jelmer Renema.

Without further ado, why might the new experiment, impressive though it was, be efficiently simulable classically? A central reason for concern is photon loss: as Chaoyang Lu has now explicitly confirmed (it was implicit in the paper), up to ~70% of the photons get lost on their way through the beamsplitter network, leaving only ~30% to be detected. At least with “Fock state” BosonSampling—i.e., the original kind, the kind with single-photon inputs that Alex Arkhipov and I proposed in 2011—it seems likely to me that such a loss rate would be fatal for quantum supremacy; see for example this 2019 paper by Renema, Shchesnovich, and Garcia-Patron.

Incidentally, if anything’s become clear over the last two weeks, it’s that I, the co-inventor of BosonSampling, am no longer any sort of expert on the subject’s literature!

Anyway, one source of uncertainty regarding the photon loss issue is that, as I said in my last post, the USTC experiment implemented a 2016 variant of BosonSampling called Gaussian BosonSampling (GBS)—and Jelmer tells me that the computational complexity of GBS in the presence of losses hasn’t yet been analyzed in the relevant regime, though there’s been work aiming in that direction. A second source of uncertainty is simply that the classical simulations work in a certain limit—namely, fixing the rate of noise and then letting the numbers of photons and modes go to infinity—but any real experiment has a fixed number of photons and modes (in USTC’s case, they’re ~50 and ~100 respectively). It wouldn’t do to reject USTC’s claim via a theoretical asymptotic argument that would equally well apply to any non-error-corrected quantum supremacy demonstration!

OK, but if an efficient classical simulation of lossy GBS experiments exists, then what is it? How does it work? It turns out that we have a plausible candidate for the answer to that, originating with a 2014 paper by Gil Kalai and Guy Kindler. Given a beamsplitter network, Kalai and Kindler considered an infinite hierarchy of better and better approximations to the BosonSampling distribution for that network. Roughly speaking, at the first level (k=1), one pretends that the photons are just classical distinguishable particles. At the second level (k=2), one correctly models quantum interference involving pairs of photons, but none of the higher-order interference. At the third level (k=3), one correctly models three-photon interference, and so on until k=n (where n is the total number of photons), when one has reproduced the original BosonSampling distribution. At least when k is small, the time needed to spoof outputs at the kth level of the hierarchy should grow like $$n^k$$. As theoretical computer scientists, Kalai and Kindler didn’t care whether their hierarchy produced any physically realistic kind of noise, but later work, by Shchesnovich, Renema, and others, showed that (as it happens) it does.
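To make the bottom of the hierarchy concrete, here’s a toy Python sketch of the k=1 level, in which each photon is routed through the network independently of the others. The 4-mode unitary here is invented for illustration; it has nothing to do with the ~100-mode USTC experiment.

```python
import numpy as np

def sample_distinguishable(U, input_modes, shots, rng):
    """k=1 spoofer: treat the photons as classical distinguishable
    particles, so a photon entering mode i exits mode o with
    probability |U[o, i]|^2, independently of the other photons."""
    m = U.shape[0]
    probs = np.abs(U) ** 2
    patterns = []
    for _ in range(shots):
        counts = np.zeros(m, dtype=int)
        for i in input_modes:
            p = probs[:, i] / probs[:, i].sum()  # renormalize float error
            counts[rng.choice(m, p=p)] += 1
        patterns.append(tuple(counts))
    return patterns

rng = np.random.default_rng(0)
# toy 4-mode interferometer: a Haar-random unitary via QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
samples = sample_distinguishable(Q, input_modes=[0, 1], shots=5, rng=rng)
print(samples)  # five output patterns, each with 2 photons in total
```

At higher levels k, one would additionally track k-photon interference terms, at cost growing like $$n^k$$.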

In its original paper, the USTC team ruled out the possibility that the first, k=1 level of this hierarchy could explain its experimental results. More recently, in response to inquiries by Sergio, Gil, Jelmer, and others, Chaoyang tells me they’ve ruled out the possibility that the k=2 level can explain their results either. We’re now eagerly awaiting the answer for larger values of k.

Let me add that I owe Gil Kalai the following public mea culpa. While his objections to QC have often struck me as unmotivated and weird, in the case at hand, Gil’s 2014 work with Kindler is clearly helping drive the scientific discussion forward. In other words, at least with BosonSampling, it turns out that Gil put his finger precisely on a key issue. He did exactly what every QC skeptic should do, and what I’ve always implored the skeptics to do.

II. BosonSampling vs. Random Circuit Sampling: A Tale of HOG and CHOG and LXEB

There’s a broader question: why should skeptics of a BosonSampling experiment even have to think about messy details like the rate of photon losses? Why shouldn’t that be solely the experimenters’ job?

To understand what I mean, consider the situation with Random Circuit Sampling, the task Google demonstrated last year with 53 qubits. There, the Google team simply collected the output samples and fed them into a benchmark that they called “Linear Cross-Entropy” (LXEB), closely related to what Lijie Chen and I called “Heavy Output Generation” (HOG) in a 2017 paper. With suitable normalization, an ideal quantum computer would achieve an LXEB score of 2, while classical random guessing would achieve an LXEB score of 1. Crucially, according to a 2019 result by me and Sam Gunn, under a plausible (albeit strong) complexity assumption, no subexponential-time classical spoofing algorithm should be able to achieve an LXEB score that’s even slightly higher than 1. In its experiment, Google reported an LXEB score of about 1.002, with a confidence interval much smaller than 0.002. Hence: quantum supremacy (subject to our computational assumption), with no further need to know anything about the sources of noise in Google’s chip! (More explicitly, Boixo, Smelyansky, and Neven did a calculation in 2017 to show that the Kalai-Kindler type of spoofing strategy definitely isn’t going to work against RCS and Linear XEB, with no computational assumption needed.)
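To illustrate the benchmark itself, here’s a toy Python calculation of the normalized LXEB score: 2^n times the average ideal probability of the returned samples. The “Porter-Thomas-like” ideal distribution below is simulated with exponential random variables rather than a real circuit.

```python
import numpy as np

def lxeb_score(samples, ideal_probs, n_qubits):
    """Normalized linear cross-entropy: 2^n times the mean ideal
    probability of the observed bitstrings.  Uniform random guessing
    scores ~1; sampling from an ideal Porter-Thomas-distributed
    circuit scores ~2."""
    return 2 ** n_qubits * ideal_probs[np.asarray(samples)].mean()

rng = np.random.default_rng(1)
n = 12
p = rng.exponential(size=2 ** n)
p /= p.sum()  # stand-in for an ideal circuit's output distribution

uniform_samples = rng.integers(0, 2 ** n, size=200_000)
quantum_samples = rng.choice(2 ** n, size=200_000, p=p)
print(lxeb_score(uniform_samples, p, n))  # close to 1
print(lxeb_score(quantum_samples, p, n))  # close to 2
```

Google’s reported 1.002 sits just above the classical score of 1, which is exactly what one expects from a noisy device whose output is a small ideal signal buried in white noise.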

So then why couldn’t the USTC team do something analogous with BosonSampling? Well, they tried to. They defined a measure that they called “HOG,” although it’s different from my and Lijie Chen’s HOG, more similar to a cross-entropy. Following Jelmer, let me call their measure CHOG, where the C could stand for Chinese, Chaoyang’s, or Changed. They calculated the CHOG for their experimental samples, and showed that it exceeds the CHOG that you’d get from the k=1 and k=2 levels of the Kalai-Kindler hierarchy, as well as from various other spoofing strategies, thereby ruling those out as classical explanations for their results.

The trouble is this: unlike with Random Circuit Sampling and LXEB, with BosonSampling and CHOG, we know that there are fast classical algorithms that achieve better scores than the trivial algorithm, the algorithm that just picks samples at random. That follows from Kalai and Kindler’s work, and it even more simply follows from a 2013 paper by me and Arkhipov, entitled “BosonSampling Is Far From Uniform.” Worse yet, with BosonSampling, we currently have no analogue of my 2019 result with Sam Gunn: that is, a result that would tell us (under suitable complexity assumptions) the highest possible CHOG score that we expect any efficient classical algorithm to be able to get. And since we don’t know exactly where that ceiling is, we can’t tell the experimentalists exactly what target they need to surpass in order to claim quantum supremacy. Absent such definitive guidance from us, the experimentalists are left playing whac-a-mole against this possible classical spoofing strategy, and that one, and that one.

This is an issue that I and others were aware of for years, although the new experiment has certainly underscored it. Had I understood just how serious the USTC group was about scaling up BosonSampling, and fast, I might’ve given the issue some more attention!

III. Fock vs. Gaussian BosonSampling

Above, I mentioned another complication in understanding the USTC experiment: namely, their reliance on Gaussian BosonSampling (GBS) rather than Fock BosonSampling (FBS), sometimes also called Aaronson-Arkhipov BosonSampling (AABS). Since I gave this issue short shrift in my previous post, let me make up for it now.

In FBS, the initial state consists of either 0 or 1 photons in each input mode, like so: |1,…,1,0,…,0⟩. We then pass the photons through our beamsplitter network, and measure the number of photons in each output mode. The result is that the amplitude of each possible output configuration can be expressed as the permanent of some n×n matrix, where n is the total number of photons. It was interest in the permanent, which plays a central role in classical computational complexity, that led me and Arkhipov to study BosonSampling in the first place.
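To make the permanent concrete, here’s a short Python implementation of Ryser’s inclusion-exclusion formula, which runs in O(2^n·n) time (exponential, as one would expect for a #P-hard function, but far better than brute-forcing all n! permutations).

```python
import numpy as np

def permanent(A):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion
    formula: perm(A) = (-1)^n * sum over subsets S of columns of
    (-1)^|S| * prod_i (sum_{j in S} A[i, j])."""
    n = A.shape[0]
    total = 0
    for subset in range(1, 1 << n):
        cols = [j for j in range(n) if subset >> j & 1]
        rowsums = A[:, cols].sum(axis=1)
        total += (-1) ** bin(subset).count("1") * np.prod(rowsums)
    return (-1) ** n * total

print(permanent(np.ones((3, 3))))  # 6.0, i.e. 3! for the all-ones matrix
```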

The trouble is, preparing initial states like |1,…,1,0,…,0⟩ turns out to be really hard. No one has yet built a source that reliably outputs one and only one photon at exactly a specified time. This led two experimental groups to propose an idea that, in a 2013 post on this blog, I named Scattershot BosonSampling (SBS). In SBS, you get to use the more readily available “Spontaneous Parametric Down-Conversion” (SPDC) photon sources, which output superpositions over different numbers of photons, of the form $$\sum_{n=0}^{\infty} \alpha_n |n \rangle |n \rangle,$$ where $$\alpha_n$$ decreases exponentially with n. You then measure the left half of each entangled pair, hope to see exactly one photon, and are guaranteed that if you do, then there’s also exactly one photon in the right half. Crucially, one can show that, if Fock BosonSampling is hard to simulate approximately using a classical computer, then the Scattershot kind must be as well.
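Here’s a toy numerical sketch of that heralding step, with a made-up squeezing parameter (a real SPDC source involves far more physics than this). The point is just that conditioning on one photon in the left half guarantees one photon in the right half.

```python
import numpy as np

rng = np.random.default_rng(2)
squeeze = 0.3  # hypothetical squeezing parameter; P(n) falls off as squeeze**(2n)

def spdc_pair(rng):
    """Draw a photon number from a two-mode squeezed vacuum:
    P(n) = (1 - lam) * lam**n with lam = squeeze**2, and the same
    photon number n appears in both halves of the pair."""
    lam = squeeze ** 2
    n = rng.geometric(1 - lam) - 1  # numpy's geometric starts at 1
    return n, n                     # perfectly correlated halves

# herald: keep only the trials whose left half shows exactly one photon
heralded = [right for left, right in (spdc_pair(rng) for _ in range(10_000))
            if left == 1]
print(len(heralded), set(heralded))  # every heralded right half is 1
```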

OK, so what’s Gaussian BosonSampling? It’s simply the generalization of SBS where, instead of SPDC states, our input can be an arbitrary “Gaussian state”: for those in the know, a state that’s exponential in some quadratic polynomial in the creation operators. If there are m modes, then such a state requires ~$$m^2$$ independent parameters to specify. The quantum optics people have a much easier time creating these Gaussian states than they do creating single-photon Fock states.

While the amplitudes in FBS are given by permanents of matrices (and thus, the probabilities by the absolute squares of permanents), the probabilities in GBS are given by a more complicated matrix function called the Hafnian. Roughly speaking, while the permanent counts the number of perfect matchings in a bipartite graph, the Hafnian counts the number of perfect matchings in an arbitrary graph. The permanent and the Hafnian are both #P-complete. In the USTC paper, they talk about yet another matrix function called the “Torontonian,” which was invented two years ago. I gather that the Torontonian is just the modification of the Hafnian for the situation where you only have “threshold detectors” (which decide whether one or more photons are present in a given mode), rather than “number-resolving detectors” (which count how many photons are present).
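Here’s an analogous toy sketch of the Hafnian as a weighted count of perfect matchings: pair up the first vertex with each possible partner and recurse on what remains. This naive recursion takes (n-1)!! steps, so it’s for tiny matrices only.

```python
import numpy as np

def hafnian(A):
    """Hafnian by direct recursion: match vertex 0 with each partner j,
    weight A[0, j], and recurse on the remaining vertices.  For a 0/1
    adjacency matrix this counts the perfect matchings of the graph."""
    n = A.shape[0]
    if n == 0:
        return 1
    rest = list(range(1, n))
    total = 0
    for j in rest:
        keep = [k for k in rest if k != j]
        total += A[0, j] * hafnian(A[np.ix_(keep, keep)])
    return total

K4 = np.ones((4, 4)) - np.eye(4)  # complete graph on 4 vertices
print(hafnian(K4))                # 3 perfect matchings
```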

If Gaussian BosonSampling includes Scattershot BosonSampling as a special case, and if Scattershot BosonSampling is at least as hard to simulate classically as the original BosonSampling, then you might hope that GBS would also be at least as hard to simulate classically as the original BosonSampling. Alas, this doesn’t follow. Why not? Because for all we know, a random GBS instance might be a lot easier than a random SBS instance. Just because permanents can be expressed using Hafnians, doesn’t mean that a random Hafnian is as hard as a random permanent.

Nevertheless, I think it’s very likely that the sort of analysis Arkhipov and I did back in 2011 could be mirrored in the Gaussian case. I.e., instead of starting with reasonable assumptions about the distribution and hardness of random permanents, and then concluding the classical hardness of approximate BosonSampling, one would start with reasonable assumptions about the distribution and hardness of random Hafnians (or “Torontonians”), and conclude the classical hardness of approximate GBS. But this is theoretical work that remains to be done!

IV. Application to Molecular Vibronic Spectra?

In 2014, Alán Aspuru-Guzik and collaborators put out a paper that made an amazing claim: namely that, contrary to what I and others had said, BosonSampling was not an intrinsically useless model of computation, good only for refuting QC skeptics like Gil Kalai! Instead, they said, a BosonSampling device (specifically, what would later be called a GBS device) could be directly applied to solve a practical problem in quantum chemistry. This is the computation of “molecular vibronic spectra,” also known as “Franck-Condon profiles,” whatever those are.

I never understood nearly enough about chemistry to evaluate this striking proposal, but I was always a bit skeptical of it, for the following reason. Nothing in the proposal seemed to take seriously that BosonSampling is a sampling task! A chemist would typically have some specific numbers that she wants to estimate, of which these “vibronic spectra” seemed to be an example. But while it’s often convenient to estimate physical quantities via Monte Carlo sampling over simulated observations of the physical system you care about, that’s not the only way to estimate physical quantities! And worryingly, in all the other examples we’d seen where BosonSampling could be used to estimate a number, the same number could also be estimated using one of several polynomial-time classical algorithms invented by Leonid Gurvits. So why should vibronic spectra be an exception?

After an email exchange with Alex Arkhipov, Juan Miguel Arrazola, Leonardo Novo, and Raul Garcia-Patron, I believe we finally got to the bottom of it, and the answer is: vibronic spectra are not an exception.

In terms of BosonSampling, the vibronic spectra task is simply to estimate the probability histogram of some weighted sum like $$w_1 s_1 + \cdots + w_m s_m,$$ where $$w_1,\ldots,w_m$$ are fixed real numbers, and $$(s_1,\ldots,s_m)$$ is a possible outcome of the BosonSampling experiment, $$s_i$$ representing the number of photons observed in mode $$i$$. Alas, while it takes some work, it turns out that Gurvits’s classical algorithms can be adapted to estimate these histograms. Granted, running the actual BosonSampling experiment would provide slightly more detailed information—namely, some exact sampled values of $$w_1 s_1 + \cdots + w_m s_m,$$ rather than merely additive approximations to the values—but since we’d still need to sort those sampled values into coarse “bins” in order to compute a histogram, it’s not clear why that additional precision would ever be of chemical interest.
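In code, the point is that the chemically meaningful object is just a coarse histogram of the sampled values, which is why an additive classical approximation of the same histogram can suffice. Here’s a toy illustration with invented weights and stand-in samples (Poisson noise, not real GBS output):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 5
w = rng.uniform(0, 1, size=m)  # hypothetical mode weights w_i

# stand-in for GBS samples: random photon-count patterns (s_1, ..., s_m)
samples = rng.poisson(0.5, size=(1000, m))

# the "spectrum" is just the histogram of the weighted sums w . s,
# sorted into coarse bins
energies = samples @ w
hist, edges = np.histogram(energies, bins=10)
print(hist.sum())  # 1000: every sample lands in some bin
```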

This is a pity, since if the vibronic spectra application had beaten what was doable classically, then it would’ve provided not merely a first practical use for BosonSampling, but also a lovely way to verify that a BosonSampling device was working as intended.

V. Application to Finding Dense Subgraphs?

A different potential application of Gaussian BosonSampling, first suggested by the Toronto-based startup Xanadu, is finding dense subgraphs in a graph. (Or at least, providing an initial seed to classical optimization methods that search for dense subgraphs.)

This is an NP-hard problem, so to say that I was skeptical of the proposal would be a gross understatement. Nevertheless, it turns out that there is a striking observation by the Xanadu team at the core of their proposal: namely that, given a graph G and a positive even integer k, a GBS device can be used to sample a random subgraph of G of size k, with probability proportional to the square of the number of perfect matchings in that subgraph. Cool, right? And potentially even useful, especially if the number of perfect matchings could serve as a rough indicator of the subgraph’s density! Alas, Xanadu’s Juan Miguel Arrazola himself recently told me that there’s a cubic-time classical algorithm for the same sampling task, so that the possible quantum speedup that one could get from GBS in this way is at most polynomial. The search for a useful application of BosonSampling continues!

And that’s all for now! I’m grateful to all the colleagues I talked to over the last couple weeks, including Alex Arkhipov, Juan Miguel Arrazola, Sergio Boixo, Raul Garcia-Patron, Leonid Gurvits, Gil Kalai, Chaoyang Lu, John Martinis, and Jelmer Renema, while obviously taking sole responsibility for any errors in the above. I look forward to a spirited discussion in the comments, and of course I’ll post updates as I learn more!

### Shor’s algorithm in higher dimensions: Guest post by Greg Kuperberg

Monday, December 7th, 2020

Upbeat advertisement: If research in QC theory or CS theory otherwise is your thing, then wouldn’t you like to live in peaceful, quiet, bicycle-based Davis, California, and be a faculty member at the large, prestigious, friendly university known as UC Davis? In the QCQI sphere, you’d have Marina Radulaski, Bruno Nachtergaele, Martin Fraas, Mukund Rangamani, Veronika Hubeny, and Nick Curro as faculty colleagues, among others; and yours truly, and hopefully more people in the future. This year the UC Davis CS department has a faculty opening in quantum computing, and another faculty opening in CS theory including quantum computing. If you are interested, then time is of the essence, since the full-consideration deadline is December 15.

In this guest post, I will toot my own horn about a paper in progress (hopefully nearly finished) that goes back to the revolutionary early days of quantum computing, namely Shor’s algorithm. The takeaway: I think that the strongest multidimensional generalization of Shor’s algorithm has been missed for decades. It appears to be a new algorithm that does more than the standard generalization described by Kitaev. (Scott wanted me to channel Captain Kirk and boldly go with a takeaway, so I did.)

Unlike Shor’s algorithm proper, I don’t know of any dramatic applications of this new algorithm. However, more than one quantum algorithm was discovered just because it looked interesting, and then found applications later. The input to Shor’s algorithm is a function $$f:\mathbb{Z} \to S$$, in other words a symbol-valued function $$f$$ on the integers, which is periodic with an unknown period $$p$$ and otherwise injective. In equations, $$f(x) = f(y)$$ if and only if $$p$$ divides $$x-y$$. In saying that the input is a function $$f$$, I mean that Shor’s algorithm is provided with an algorithm to compute $$f$$ efficiently. Shor’s algorithm itself can then find the period $$p$$ in (quantum) polynomial time in the number of digits of $$p$$. (Not polynomial time in $$p$$, polynomial time in its logarithm.) If you’ve heard that Shor’s algorithm can factor integers, that is just one special case where $$f(x) = a^x$$ mod $$N$$, the integer to factor. In its generalized form, Shor’s algorithm is miraculous. In particular, if $$f$$ is a black-box function, then it is routine to prove that any classical algorithm to do the same thing needs exponentially many values of $$f$$, or values $$f(x)$$ where $$x$$ has exponentially many digits.

Shor’s algorithm begat the Shor-Kitaev algorithm, which does the same thing for a higher dimensional periodic function $$f:\mathbb{Z}^d \to S$$, where $$f$$ is now periodic with respect to a lattice $$L$$. The Shor-Kitaev algorithm in turn begat the hidden subgroup problem (called HSP among friends), where $$\mathbb{Z}$$ or $$\mathbb{Z}^d$$ is replaced by a group $$G$$, and now $$f$$ is $$L$$-periodic for some subgroup $$L$$. HSP varies substantially in both its computational difficulty and its complexity status, depending on the structure of $$G$$ as well as optional restrictions on $$L$$.

A funny thing happened on the way to the forum in later work on HSP. Most of the later work has been in the special case that the ambient group $$G$$ is finite, even though $$G$$ is infinite in the famous case of Shor’s algorithm. My paper-to-be explores the hidden subgroup problem in various cases when $$G$$ is infinite. In particular, I noticed that even the case $$G = \mathbb{Z}^d$$ isn’t fully solved, because the Shor-Kitaev algorithm makes the extra assumption that $$L$$ is a maximum-rank lattice, or equivalently that $$L$$ is a finite-index subgroup of $$\mathbb{Z}^d$$. As far as I know, the more general case where $$L$$ might have lower rank wasn’t treated previously. I found an extension of Shor-Kitaev to handle this case, which I will sketch after discussing some points about HSP in general.

## Quantum algorithms for HSP

Every known quantum algorithm for HSP has the same two opening steps. First prepare an equal superposition $$|\psi_G\rangle$$ of “all” elements of the ambient group $$G$$, then apply a unitary form of the hiding function $$f$$ to get the following: $U_f|\psi_G\rangle \propto \sum_{x \in G} |x,f(x)\rangle.$ Actually, you can only do exactly this when $$G$$ is a finite group. You cannot make an equal quantum superposition on an infinite set, for the same reason that you cannot choose an integer uniformly at random from among all of the integers: It would defy the laws of probability. Since computers are finite, a realistic quantum algorithm cannot make an unequal quantum superposition on an infinite set either. However, if $$G$$ is a well-behaved infinite group, then you can approximate the same idea by making an equal superposition on a large but finite box $$B \subseteq G$$ instead: $U_f|\psi_B\rangle \propto \sum_{x \in B \subseteq G} |x,f(x)\rangle.$ Quantum algorithms for HSP now follow a third counterintuitive “step”, namely, that you should discard the output qubits that contain the value $$f(x)$$. You should take the values of $$f$$ to be incomprehensible data, encrypted for all you know. A good quantum algorithm evaluates $$f$$ too few times to interpret its output, so you might as well let it go. (By contrast, a classical algorithm is forced to dig for the only meaningful information that the output of $$f$$ has: namely, it has to keep searching until it finds equal values.) What remains, and what turns out to be highly valuable, is the input state in a partially measured form. I remember joking with Cris Moore about the different ways of looking at this step:

1. You can measure the output qubits.
2. The janitor can fish the output qubits out of the trash and measure them for you.
3. You can secretly not measure the output qubits and say you did.
4. You can keep the output qubits and say you threw them away.

Measuring the output qubits wins you the purely mathematical convenience that the posterior state on the input qubits is pure (a vector state) rather than mixed (a density matrix). However, since no use is made of the measured value, it truly makes no difference for the algorithm.

The final universal step for all HSP quantum algorithms is to apply a quantum Fourier transform (or QFT) to the input register and measure the resulting Fourier mode. This might seem like a creative step that may or may not be a good idea. However, if you have an efficient algorithm for the QFT for your particular group $$G$$, then you might as well do this, because (taking the interpretation that you threw away the output register) the environment already knows the Fourier mode. You can assume that this Fourier mode has been published in the New York Times, and you won’t lose anything by reading the papers.

## Fourier modes and Fourier stripes

I’ll now let $$G = \mathbb{Z}^d$$ and make things more explicit, for starters by putting arrows on elements $$\vec{x} \in \mathbb{Z}^d$$ to indicate that they are lattice vectors. The standard beginning produces a superposition $$|\psi_{L+\vec{v}}\rangle$$ on a translate $$L+\vec{v}$$ of the hidden lattice $$L$$. (Again, $$L$$ is the periodicity of $$f$$.) If this state could be an equal superposition on the infinite set $$L+\vec{v}$$, and if you could do a perfect QFT on the infinite group $$\mathbb{Z}^d$$, then the resulting Fourier mode would be a randomly chosen element of a certain dual group $$L^\# \subseteq (\mathbb{R}/\mathbb{Z})^d$$ inside the torus of Fourier modes of $$\mathbb{Z}^d$$. Namely, $$L^\#$$ consists of those vectors $$\vec{y} \in (\mathbb{R}/\mathbb{Z})^d$$ such that the dot product $$\vec{x} \cdot \vec{y}$$ is an integer for every $$\vec{x} \in L$$. (If you expected the Fourier dual of the integers $$\mathbb{Z}$$ to be a circle $$\mathbb{R}/2\pi\mathbb{Z}$$ of length $$2\pi$$, I found it convenient here to rescale it to a circle $$\mathbb{R}/\mathbb{Z}$$ of length 1. This is often considered gauche these days, like using $$h$$ instead of $$\hbar$$ in quantum mechanics, but in context it’s okay.) In principle, you can learn $$L^\#$$ from sampling it, and then learn $$L$$ from $$L^\#$$. Happily, the unknown and irrelevant translation vector $$\vec{v}$$ is erased in this method.
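As a quick sanity check on the definition of $$L^\#$$, here’s a toy 2D example in exact arithmetic, for a lattice I made up: $$L$$ generated by (2, 0) and (0, 3), whose dual inside the unit torus should be the grid of multiples of (1/2, 1/3).

```python
from fractions import Fraction

# L^# = points y in the torus with x . y an integer for every x in L
gens = [(2, 0), (0, 3)]
candidates = [(Fraction(a, 6), Fraction(b, 6))
              for a in range(6) for b in range(6)]
dual = [y for y in candidates
        if all(sum(Fraction(xi) * yi for xi, yi in zip(x, y)) % 1 == 0
               for x in gens)]
print(len(dual))  # 6 points: the multiples of (1/2, 1/3) mod 1
```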

In practice, it’s not so simple. As before, you cannot actually make an equal superposition on all of $$L+\vec{v}$$, but only trimmed to a box $$B \subseteq \mathbb{Z}^d$$. If you have $$q$$ qubits available for each coordinate of $$\mathbb{Z}^d$$, then $$B$$ might be a $$d$$-dimensional cube with $$Q = 2^q$$ lattice points in each direction. Following Peter Shor’s famous paper, the standard thing to do here is to identify $$B$$ with the finite group $$(\mathbb{Z}/Q)^d$$ and do the QFT there instead. This is gauche as pure mathematics, but it’s reasonable as computer science. In any case, it works, but it comes at a price. You should rescale the resulting Fourier mode $$\vec{y} \in (\mathbb{Z}/Q)^d$$ as $$\vec{y}_1 = \vec{y}/Q$$ to match it to the torus $$(\mathbb{R}/\mathbb{Z})^d$$. Even if you do that, $$\vec{y}_1$$ is not actually a uniformly random element of $$L^\#$$, but rather a noisy, discretized approximation of one.

In Shor’s algorithm, the remaining work is often interpreted as the post-climax. In this case $$L = p\mathbb{Z}$$, where $$p$$ is the hidden period of $$f$$, and $$L^\#$$ consists of the multiples of $$1/p$$ in $$\mathbb{R}/\mathbb{Z}$$. The Fourier mode $$y_1$$ (skipping the arrow since we are in one dimension) is an approximation to some fraction $$r/p$$ with roughly $$q$$ binary digits of precision. ($$y_1$$ is often but not always the very best binary approximation to $$r/p$$ with the available precision.) If you have enough precision, you can learn a fraction from its digits, either in base 2 or in any base. For instance, if I’m thinking of a fraction that is approximately 0.2857, then 2/7 is much closer than any other fraction with a one-digit denominator. As many people know, and as Shor explained in his paper, continued fractions are an efficient and optimal algorithm for this in larger cases.
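The 0.2857 example can be checked directly with Python’s built-in continued-fraction machinery, since `Fraction.limit_denominator` performs exactly this kind of search for the best small-denominator approximation:

```python
from fractions import Fraction

# Recover a hidden fraction r/p from a truncated decimal, as the
# classical post-processing in Shor's algorithm does.
approx = Fraction("0.2857")         # a noisy, truncated Fourier mode y_1
print(approx.limit_denominator(9))  # 2/7, the closest one-digit-denominator fraction
```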

The Shor-Kitaev algorithm works the same way. You can denoise each coordinate of each Fourier example $$\vec{y}_1$$ with the continued fraction algorithm to obtain an exact element $$\vec{y}_0 \in L^\#$$. You can learn $$L^\#$$ with a polynomial number of samples, and then learn $$L$$ from that with integer linear algebra. However, this approach can only work if $$L^\#$$ is a finite group, or equivalently when $$L$$ has maximum rank $$d$$. This condition is explicitly stated in Kitaev’s paper, and in most but not all of the papers and books that cite this algorithm. If $$L$$ has maximum rank, then the picture in Fourier space looks like this:

However, if $$L$$ has rank $$\ell < d$$, then $$L^\#$$ is a pattern of $$(d-\ell)$$-dimensional stripes, like this instead:

In this case, as the picture indicates, each coordinate of $$\vec{y}_1$$ is flat random and individually irreparable. If you knew the direction of the stripes, then you could define a slanted coordinate system where some of the coordinates of $$\vec{y}_1$$ could be repaired. But the tangent directions of $$L^\#$$ essentially beg the question. They are the orthogonal space of $$L_\mathbb{R}$$, the vector space subtended by the hidden subgroup $$L$$. If you know $$L_\mathbb{R}$$, then you can find $$L$$ by running Shor-Kitaev in the lattice $$L_\mathbb{R} \cap \mathbb{Z}^d$$.

My solution to this conundrum is to observe that the multiples of a randomly chosen point $$\vec{y}_0$$ in $$L^\#$$ have a good chance of filling out $$L^\#$$ adequately well, in particular to land near $$\vec{0}$$ often enough to reveal the tangent directions of $$L^\#$$. You have to make do with a noisy sample $$\vec{y}_1$$ instead, but by making the QFT radix $$Q$$ large enough, you can reduce the noise well enough for this to work. Still, even if you know that these small, high-quality multiples of $$\vec{y}_1$$ exist, they are needles in an exponential haystack of bad multiples, so how do you find them? It turns out that the versatile LLL algorithm, which finds a basis of short vectors in a lattice, can be used here. The multiples of $$\vec{y}_0$$ (say, for simplicity) aren’t a lattice, they are a dense orbit in $$L^\#$$ or part of it. However, they are a shadow of a lattice one dimension higher, which you can supply to the LLL algorithm. This step lets you compute the linear span $$L_\mathbb{R}$$ of $$L$$ from its perpendicular space, and then as mentioned you can use Shor-Kitaev to learn the exact geometry of $$L$$.

### Quantum supremacy, now with BosonSampling

Thursday, December 3rd, 2020

Update (12/5): The Google team, along with Gil Kalai, have raised questions about whether the results of the new BosonSampling experiment might be easier to spoof classically than the USTC team thought they were, because of a crucial difference between BosonSampling and qubit-based random circuit sampling. Namely, with random circuit sampling, the marginal distribution over any k output qubits (for small k) is exponentially close to the uniform distribution. With BosonSampling, by contrast, the marginal distribution over k output modes is distinguishable from uniform, as Arkhipov and I noted in a 2013 followup paper. On the one hand, these easily-detected nonuniformities provide a quick, useful sanity check for whether BosonSampling is being done correctly. On the other hand, they might also give classical spoofing algorithms more of a toehold. The question is whether, by spoofing the k-mode marginals, a classical algorithm could also achieve scores on the relevant “HOG” (Heavy Output Generation) benchmark that are comparable to what the USTC team reported.

One way or the other, this question should be resolvable by looking at the data that’s already been collected, and we’re now trying to get to the bottom of it. Having failed to flag this potential issue when I reviewed the paper, I felt a moral obligation at least to let my readers know about it as soon as I did. If nothing else, this episode is an answer to those who claim this stuff is all obvious. Please pardon the dust while the science is underway!

A group led by Jianwei Pan and Chao-Yang Lu, based mainly at USTC in Hefei, China, announced today that it achieved BosonSampling with 40-70 detected photons—up to and beyond the limit where a classical supercomputer could feasibly verify the results. (Technically, they achieved a variant called Gaussian BosonSampling: a generalization of what I called Scattershot BosonSampling in a 2013 post on this blog.)

For more, see also Emily Conover’s piece in Science News, or Daniel Garisto’s in Scientific American, both of which I consulted on. (Full disclosure: I was one of the reviewers for the Pan group’s Science paper, and will be writing the Perspective article to accompany it.)

The new result follows the announcement of 14-photon BosonSampling by the same group a year ago. It represents the second time quantum supremacy has been reported, following Google’s celebrated announcement from last year, and the first time it’s been done using photonics rather than superconducting qubits.

For anyone who regards it as boring or obvious, here and here is Gil Kalai, on this blog, telling me why BosonSampling would never scale beyond 8-10 photons. (He wrote that, if aliens forced us to try, then much like with the Ramsey number R(6,6), our only hope would be to attack the aliens.) Here’s Kalai making a similar prediction, on the impossibility of quantum supremacy by BosonSampling or any other means, in his plenary address to the International Congress of Mathematicians two years ago.

Even if we set aside the quantum computing skeptics, many colleagues told me they thought experimental BosonSampling was a dead end, because of photon losses and the staggering difficulty of synchronizing 50-100 single-photon sources. They said that a convincing demonstration of quantum supremacy would have to await the arrival of quantum fault-tolerance—or at any rate, some hardware platform more robust than photonics. I always agreed that they might be right. Furthermore, even if 50-photon BosonSampling was possible, after Google reached the supremacy milestone first with superconducting qubits, it wasn’t clear if anyone would still bother. Even when I learned a year ago about the USTC group’s intention to go for it, I was skeptical, figuring I’d believe it when I saw it.

Obviously the new result isn’t dispositive. Nevertheless, as someone whose intellectual origins are close to pure math, it’s strange and exciting to find myself in a field where, once in a while, the world itself gets to weigh in on a theoretical disagreement.

What is BosonSampling? You must be new here! Briefly, it’s a proposal for achieving quantum supremacy by simply passing identical, non-interacting photons through an array of beamsplitters, and then measuring where they end up. For more: in increasing order of difficulty, here’s an MIT News article from back in 2011, here’s the Wikipedia page, here are my PowerPoint slides, here are my lecture notes from Rio de Janeiro, and here’s my original paper with Arkhipov.

What is quantum supremacy? Roughly, the use of a programmable or configurable quantum computer to solve some well-defined computational problem much faster than we know how to solve it with any existing classical computer. “Quantum supremacy,” a term coined by John Preskill in 2012, does not mean useful QC, or scalable QC, or fault-tolerant QC, all of which remain outstanding challenges. For more, see my Supreme Quantum Supremacy FAQ, or (e.g.) my recent Lytle Lecture for the University of Washington.

If Google already announced quantum supremacy a year ago, what’s the point of this new experiment? To me, at least, quantum supremacy seems important enough to do at least twice! Also, as I said, this represents the first demonstration that quantum supremacy is possible via photonics. Finally, as the authors point out, the new experiment has one big technical advantage compared to Google’s: namely, many more possible output states (~10^30 of them, rather than a mere ~9 quadrillion). This makes it infeasible to calculate the whole probability distribution over outputs and store it on a gigantic hard disk (after which one could easily generate as many samples as one wanted), which is what IBM proposed doing in its response to Google’s announcement.
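The comparison is a one-line computation. (Here I take the ~10^30 to be the 2^100 click patterns of 100 click/no-click threshold detectors; the mode count is my reading of the reported setup, not something stated in this post.)

```python
sycamore_outputs = 2 ** 53   # bitstrings from 53 qubits: ~9 quadrillion
ustc_outputs = 2 ** 100      # click patterns of 100 threshold detectors
                             # (mode count assumed from the published setup)
ratio = ustc_outputs // sycamore_outputs  # ~10^14 times more output states
```

With ~9×10^15 outputs, a big enough hard disk can store every probability; with ~10^30, it can’t, which is why IBM’s proposed attack doesn’t transfer.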

Is BosonSampling a form of universal quantum computing? No, we don’t even think it can simulate universal classical computing! It’s designed for exactly one task: namely, demonstrating quantum supremacy and refuting Gil Kalai. It might have some other applications besides that, but if so, they’ll be icing on the cake. This is in contrast to Google’s Sycamore processor, which in principle is a universal quantum computer, just with a severe limit on the number of qubits (53) and how many layers of gates one can apply to them (about 20).

Is BosonSampling at least a step toward universal quantum computing? I think so! In 2001, Knill, Laflamme, and Milburn (KLM) famously showed that pure, non-interacting photons, passing through a network of beamsplitters, are capable of universal QC, provided we assume one extra thing: namely, the ability to measure the photons at intermediate times, and change which beamsplitters to apply to the remaining photons depending on the outcome. In other words, “BosonSampling plus adaptive measurements equals universality.” Basically, KLM is the holy grail that experimental optics groups around the world have been working toward for 20 years, with BosonSampling just a more achievable pit stop along the way.

Are there any applications of BosonSampling? We don’t know yet. There are proposals in the literature to apply BosonSampling to vibronic spectra in quantum chemistry, finding dense subgraphs, and other problems, but I’m not yet sure whether these proposals will yield real speedups over the best we can do with classical computers, for a task of practical interest that involves estimating specific numbers (as opposed to sampling tasks, where BosonSampling almost certainly does yield exponential speedups, but which are rarely the thing practitioners directly care about). [See this comment for further discussion of the issues regarding dense subgraphs.] In a completely different direction, one could try to use BosonSampling to generate cryptographically certified random bits, along the lines of my proposal from 2018, much like one could with qubit-based quantum circuits.

How hard is it to simulate BosonSampling on a classical computer? As far as we know today, the difficulty of simulating a “generic” BosonSampling experiment increases roughly like 2^n, where n is the number of detected photons. It might be easier than that, particularly when noise and imperfections are taken into account; and at any rate it might be easier to spoof the statistical tests that one applies to verify the outputs. I and others managed to give some theoretical evidence against those possibilities, but just like with Google’s experiment, it’s conceivable that some future breakthrough will change the outlook and remove the case for quantum supremacy.
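For concreteness, the ~2^n scaling traces back to computing permanents of n×n matrices. The classic general-purpose method, Ryser’s formula, takes on the order of 2^n·n^2 arithmetic operations, versus the n! terms of naive expansion. A minimal sketch:

```python
def permanent(A):
    """Ryser's formula: perm(A) = (-1)^n * sum over nonempty column
    subsets S of (-1)^|S| * prod_i (sum_{j in S} A[i][j])."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):          # nonempty subsets of columns
        sign = (-1) ** bin(mask).count("1")
        rowprod = 1
        for row in A:
            rowprod *= sum(row[j] for j in range(n) if (mask >> j) & 1)
        total += sign * rowprod
    return (-1) ** n * total

# Sanity checks: perm(I_3) = 1, perm of the all-ones 3x3 matrix = 3! = 6.
```

The loop over all 2^n−1 subsets is exactly where the exponential cost lives; no known algorithm does fundamentally better for general matrices.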

Do you have any amusing stories? When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there. This was by far the most expensive referee report I ever wrote!

Also: when Covid first started, and facemasks were plentiful in China but almost impossible to get in the US, Chao-Yang Lu, one of the leaders of the new work and my sometime correspondent on the theory of BosonSampling, decided to mail me a box of 200 masks (I didn’t ask for it). I don’t think that influenced my later review, but it was appreciated nonetheless.

Huge congratulations to the whole team for their accomplishment!

### Happy Thanksgiving Y’All!

Wednesday, November 25th, 2020

While a lot of pain is still ahead, this year I’m thankful that a dark chapter in American history might finally be drawing to a close. I’m thankful that the mRNA vaccines actually work. I’m thankful that my family has remained safe, and I’m thankful for all the essential workers who’ve kept our civilization running.

A few things:

1. Friend-of-the-blog Jelani Nelson asked me to advertise an important questionnaire for theoretical computer scientists, about what the future of STOC and FOCS should look like (for example, should they become all virtual?). It only takes 2 or 3 minutes to fill out (I just did).
2. Here’s a podcast that I recently did with UT Austin undergraduate Dwarkesh Patel. (As usual, I recommend 2x speed to compensate for my verbal tics.)
3. Feel free to use the comments on this post to talk about recent progress in quantum computing or computational complexity! Like, I dunno, a (sub)exponential black-box speedup for the adiabatic algorithm, or anti-concentration for log-depth random quantum circuits, or an improved shadow tomography procedure, or a quantum algorithm for nonlinear differential equations, or a barrier to proving strong 3-party parallel repetition, or equivalence of one-way functions and time-bounded Kolmogorov complexity, or turning any hard-on-average NP problem into one that’s guaranteed to have solutions.
4. It’s funny how quantum computing, P vs. NP, and so forth can come to feel like just an utterly mundane day job, not something anyone outside a small circle could possibly want to talk about while the fate of civilization hangs in the balance. Sometimes it takes my readers to remind me that not only are these topics what brought most of you here in the first place, they’re also awesome! So, I’ll mark that down as one more thing to be thankful for.

### Annual post: Come join UT Austin’s Quantum Information Center!

Wednesday, November 18th, 2020

Hook ’em Hadamards!

If you’re a prospective PhD student: Apply here for the CS department (the deadline this year is December 15th), here for the physics department (the deadline is December 1st), or here for the ECE department (the deadline is the 15th). GREs are not required this year because of covid. If you apply to CS and specify that you want to work with me, I’ll be sure to see your application. If you apply to physics or ECE, I won’t see your application, but once you arrive, I can sometimes supervise or co-supervise PhD students in other departments (or, of course, serve on their committees). In any case, everyone in the UT community is extremely welcome at our quantum information group meetings (which are now on Zoom, naturally, but depending on vaccine distribution, hopefully won’t be by the time you arrive!).
Emailing me won’t make a difference. Admissions are very competitive, so apply broadly to maximize your chances.

If you’re a prospective postdoctoral fellow: By January 1, 2021, please email me a cover letter, your CV, and two or three of your best papers (links or attachments). Please also ask two recommenders to email me their letters by January 1. While my own work tends toward computational complexity, I’m open to all parts of theoretical quantum computing and information.

If you’re a prospective faculty member: Yes, faculty searches are still happening despite covid! Go here to apply for an opening in the CS department (which, in quantum computing, currently includes me and MIP*=RE superstar John Wright), or here to apply to the physics department (which, in quantum computing, currently includes Drew Potter, along with a world-class condensed matter group).

### My second podcast with Lex Fridman

Monday, October 12th, 2020

Here it is—enjoy! (I strongly recommend listening at 2x speed.)

We recorded it a month ago—outdoors (for obvious covid reasons), on a covered balcony in Austin, as it drizzled all around us. Topics included:

• Whether the universe is a simulation
• Eugene Goostman, GPT-3, the Turing Test, and consciousness
• Why I disagree with Integrated Information Theory
• Why I disagree with Penrose’s ideas about physics and the mind
• Intro to complexity theory, including P, NP, PSPACE, BQP, and SZK
• The US’s catastrophic failure on covid
• The importance of the election
• My objections to cancel culture
• The role of love in my life (!)

Thanks so much to Lex for his characteristically probing questions, apologies as always for my verbal tics, and here’s our first podcast for those who missed that one.

### My Utility+ podcast with Matthew Putman

Thursday, September 3rd, 2020

Another Update (Sep. 15): Sorry for the long delay; new post coming soon!
To tide you over—or just to distract you from the darkness figuratively and literally engulfing our civilization—here’s a Fortune article about today’s announcement by IBM of its plans for the next few years in superconducting quantum computing, with some remarks from yours truly.

Another Update (Sep. 8): A reader wrote to let me know about a fundraiser for Denys Smirnov, a 2015 IMO gold medalist from Ukraine who needs an expensive bone marrow transplant to survive Hodgkin’s lymphoma. I just donated and I hope you’ll consider it too!

Update (Sep. 5): Here’s another quantum computing podcast I did, “Dunc Tank” with Duncan Gammie. Enjoy!

Thanks so much to Shtetl-Optimized readers: so far we’ve raised $1,371 for the Biden-Harris campaign and $225 for the Lincoln Project, which I intend to match for $3,192 total. If you’d like to donate by tonight (Thursday night), there’s still $404 to go!

Meanwhile, a mere three days after declaring my “new motto,” I’ve come up with a new new motto for this blog, hopefully a more cheerful one:

When civilization seems on the brink of collapse, sometimes there’s nothing left to talk about but maximal separations between randomized and quantum query complexity.

On that note, please enjoy my new one-hour podcast on Spotify (if that link doesn’t work, try this one) with Matthew Putman of Utility+. Alas, my umming and ahhing were more frequent than I now aim for, but that’s partly compensated for by Matthew’s excellent decision to speed up the audio. This was an unusually wide-ranging interview, covering everything from SlateStarCodex to quantum gravity to interdisciplinary conferences to the challenges of teaching quantum computing to 7-year-olds. I hope you like it!

### Is this blog obsolete?
Monday, July 20th, 2020

Longtime friend-of-the-blog Michael Nielsen, formerly a quantum computing theorist and now an independent thinker about science and AI and learning, posted the following exchange with the astounding GPT-3 text generator / chatbot:

Q: Is it possible to use quantum computers to solve NP-complete problems in polynomial time?
A: It is not possible to use quantum computers to solve NP-complete problems in polynomial time.
Q: Why not?
A: Because NP-complete problems are not in BQP.
Q: You’ve just restated your earlier assertion! What I’d like is to understand the underlying reason why not!
A: I’m sorry, but I don’t have time to explain the underlying reason why not.
Q: No offence, but this seems really bogus. I don’t believe you understand the reason why!
A: I’m sorry, but I don’t have time to explain the underlying reason why not.

(And then there are a few more rounds of the AI not having time to “explain the underlying reason why not.”)

This is, like, 80% of the way to replacing Shtetl-Optimized!

For much more discussion of GPT-3 and its implications, and samples of its output, see for example the SSC subreddit. At the moment, as far as I can tell, the closest a person off the street can easily come to experimenting with GPT-3 themselves is using a website called AI Dungeon. And yes, as many have already remarked, this is clearly the MITS Altair of text-generating AI, an amusing toy that’s also the start of something that will change the world.

### David Poulin

Monday, June 29th, 2020

2020 sucks. Yesterday I learned that David Poulin, a creative and widely-beloved quantum computing and information theorist, has died at age 43, of an aggressive brain cancer. After studying under many of the field’s legends—Gilles Brassard, Wojciech Zurek, Ray Laflamme, Gerard Milburn, John Preskill—David became a professor at the University of Sherbrooke in Quebec.
There he played a leading role in CIFAR (the Canadian Institute For Advanced Research), eventually co-directing its quantum information science program with Aephraim Steinberg. Just this fall (!), David moved to Microsoft Research to start a new phase of his career. He’s survived by a large family.

While I can’t claim any deep knowledge of David’s work—he and I pursued very different problems—it seems appropriate to mention some of his best-known contributions. With David Kribs, Ray Laflamme, and Maia Lesosky, he introduced the formalism of operator quantum error correction, and made many other contributions to the theory of quantum error-correction and fault-tolerance (including the estimation of thresholds). He and coauthors showed in a Nature paper how to do quantum state tomography on 1D matrix product states efficiently. With Pavithran Iyer, he proved that optimal decoding of stabilizer codes is #P-hard. And if none of that makes a sufficient impression on Shtetl-Optimized readers: well, back in 2013, when D-Wave was claiming to have achieved huge quantum speedups, David Poulin was one of the few experts willing to take a clear skeptical stance in public (including right in my comment section—see here for example).

I vividly remember being officemates with David back in 2003, at the Perimeter Institute in Waterloo—before Perimeter had its sleek black building, when it still operated out of a converted tavern. (My and David’s office was in the basement, reached via a narrow staircase.) David liked to tease me: for example, if I found him in conversation with someone else and asked what it was about, he’d say, “oh, nothing to do with computational efficiency, no reason for you to care.” (And yet, much of David’s work ultimately would have to do with computational efficiency.)

David was taken way too soon and will be missed by everyone who knew him. Feel free to share David stories in the comments.

### Quantum Computing Since Democritus: New Foreword!
Saturday, June 20th, 2020

Time for a non-depressing post. Quantum Computing Since Democritus, which is already available in English and Russian, is about to be published in both Chinese and Japanese. (So if you read this blog, but have avoided tackling QCSD because your Chinese or Japanese is better than your English, today’s your day!) To go along with the new editions, Cambridge University Press asked me to write a new foreword, reflecting on what happened in the seven years since the book was published. The editor, Paul Dobson, kindly gave me permission to share the new foreword on my blog. So without further ado…

Quantum Computing Since Democritus began its life as a course that I taught at the University of Waterloo in 2006. Seven years later, it became the book that you now hold. Its preface ended with the following words:

“Here’s hoping that, in 2020, this book will be as badly in need of revision as the 2006 lecture notes were in 2013.”

As I write this, in June 2020, a lot has happened that I would never have predicted in 2013. Donald Trump is the President of the United States, and is up for reelection shortly. This is not a political book, so let me resist the urge to comment further. Meanwhile, the coronavirus pandemic is ravaging the world, killing hundreds of thousands of people, crashing economies, and shutting down schools and universities (including mine). And in the past few weeks, protests against racism and police brutality started in America and then spread to the world, despite the danger of protesting during a pandemic.

Leaving aside the state of the world, my own life is also very different than it was seven years ago. Along with my family, I’ve moved from MIT to the University of Texas in Austin. My daughter, who was born at almost exactly the same time as Quantum Computing Since Democritus, is now a first-grader, and is joined by a 3-year-old son.
When my daughter’s school shut down due to the coronavirus, I began home-schooling her in math, computer science, and physics—in some of the exact same topics covered in this book. I’m now engaged in an experiment to see what portion of this material can be made accessible to a 7-year-old.

But what about the material itself? How has it held up over seven years? Both the bad news and the (for you) good news, I suppose, is that it’s not particularly out of date. The intellectual underpinnings of quantum computing and its surrounding disciplines remain largely as they were. Still, let me discuss what has changed.

Between 2013 and 2020, the field of quantum computing made a striking transition, from a mostly academic pursuit to a major technological arms race. The Chinese government, the US government, and the European Union have all pledged billions of dollars for quantum computing research. Google, Microsoft, IBM, Amazon, Alibaba, Intel, and Honeywell also now all have well-funded groups tasked with building quantum computers, or providing quantum-computing-related software and services, or even just doing classical computing that’s “quantum-inspired.” These giants are joined by dozens of startups focused entirely on quantum computing.

The new efforts vary greatly in caliber; some efforts seem rooted in visions of what quantum computers will be able to help with, and how soon, that I find to be wildly overoptimistic or even irresponsible. But perhaps it’s always this way when a new technology moves from an intellectual aspiration to a commercial prospect. Having joined the field around 1999, before there were any commercial efforts in quantum computing, I’ve found the change disorienting.
But while some of the new excitement is based on pure hype—on marketers now mixing some “quantum” into their word-salad of “blockchain,” “deep learning,” etc., with no particular understanding of any of the ingredients—there really have been some scientific advances in quantum computing since 2013, a fire underneath the smoke.

Surely the crowning achievement of quantum computing during this period was the achievement of “quantum supremacy,” which a team at Google announced in the fall of 2019. For the first time, a programmable quantum computer was used to outperform any classical computer on earth, running any currently known algorithm. Google’s device, called “Sycamore,” with 53 superconducting qubits cooled to a hundredth of a degree above absolute zero, solved a well-defined albeit probably useless sampling problem in about 3 minutes. To compare, current state-of-the-art simulations on classical computers need a few days, even with hundreds of thousands of parallel processors. Ah, but will a better classical simulation be possible? That’s an open question in quantum complexity! The discussion of that question draws on theoretical work that various colleagues and I did over the past decade. That work in turn draws on my so-called PostBQP=PP theorem from 2004, explained in this book.

In the past seven years, there were also several breakthroughs in quantum computing theory—some of which resolved open problems mentioned in this book. In 2018, Ran Raz and Avishay Tal gave an oracle relative to which BQP (Bounded-Error Quantum Polynomial-Time) is not contained in PH (the Polynomial Hierarchy). This solved one of the main open questions, since 1993, about where BQP fits in with classical complexity classes, at least in the black-box setting. (What does that mean? Read the book!)
Raz and Tal’s proof used a candidate problem that I had defined in 2009 and called “Forrelation.”

Also in 2018, Urmila Mahadev gave a protocol, based on cryptography, by which a polynomial-time quantum computer (i.e., a BQP machine) could always prove the results of its computation to a classical polynomial-time skeptic, purely by exchanging classical messages with the skeptic. Following Urmila’s achievement, I was delighted to give her a $25 prize for solving the problem that I’d announced on my blog back in 2007.

Perhaps most spectacularly of all, in 2020, Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen proved that MIP*=RE.  Here MIP* means the class of problems solvable using multi-prover interactive proof systems with quantumly entangled provers (and classical polynomial-time verifiers), while RE means Recursively Enumerable: a class that includes not only all the computable problems, but even the infamous halting problem (!).  To say it more simply, entangled provers can convince a polynomial-time verifier that an arbitrary Turing machine halts.  Besides its intrinsic interest, a byproduct of this breakthrough was to answer a decades-old question in pure math, the so-called Connes Embedding Conjecture (by refuting the conjecture).  To my knowledge, the new result represents the first time that quantum computing has reached “all the way up the ladder of hardness” to touch uncomputable problems.  It’s also the first time that non-relativizing techniques, like the ones central to the study of interactive proofs, were ever used in computability theory.

In a different direction, the last seven years have witnessed an astonishing convergence between quantum information and quantum gravity—something that was just starting when Quantum Computing Since Democritus appeared in 2013, and that I mentioned as an exciting new direction.  Since then, the so-called “It from Qubit” collaboration has brought together quantum computing theorists with string theorists and former string theorists—experts in things like the black hole information problem—to develop a shared language.  One striking proposal that’s emerged from this is a fundamental role for quantum circuit complexity—that is, the smallest number of 1- and 2-qubit gates needed to prepare a given n-qubit state from the all-0 state—in the so-called AdS/CFT (Anti de Sitter / Conformal Field Theory) correspondence.  AdS/CFT is a duality between physical theories involving different numbers of spatial dimensions; for more than twenty years, it’s been a central testbed for ideas about quantum gravity.  But the duality is extremely nonlocal: a “simple” quantity in the AdS theory, like the volume of a wormhole, can correspond to an incredibly “complicated” quantity in the dual CFT.  The new proposal is that the CFT quantity might be not just complicated, but literally circuit complexity itself.  Fanciful as that sounds, the truth is that no one has come up with any other proposal that passes the same sanity checks.  A related new insight is that the nonlocal mapping between the AdS and CFT theories is not merely analogous to, but literally an example of, a quantum error-correcting code: the same mathematical objects that will be needed to build scalable quantum computers.

When Quantum Computing Since Democritus was first published, some people thought it went too far in elevating computer science, and computational complexity in particular, to fundamental roles in understanding the physical world.  But even I wasn’t audacious enough to posit connections like the ones above, which are now more-or-less mainstream in quantum gravity research.

I’m proud that I wrote Quantum Computing Since Democritus, but as the years go by, I find that I have no particular desire to revise it, or even reread it.  It seems far better for the book to stand as a record of what I knew and believed and cared about at a certain moment in time.

The intellectual quest that’s defined my life—the quest to wrap together computation, physics, math, and philosophy into some sort of coherent picture of the world—might never end.  But it does need to start somewhere.  I’m honored that you chose Quantum Computing Since Democritus as a place to start or continue your own quest.  I hope you enjoy it.

Scott Aaronson
Austin, Texas
June 2020