Chinese BosonSampling: The gloves are off

December 16th, 2020

Two weeks ago, I blogged about the striking claim, by the group headed by Chaoyang Lu and Jianwei Pan at USTC in China, to have achieved quantum supremacy via BosonSampling with 50-70 detected photons. I also did a four-part interview on the subject with Jonathan Tennenbaum at Asia Times, and other interviews elsewhere. None of that stopped some people, who I guess didn’t google, from writing to tell me how disappointed they were by my silence!

The reality, though, is that a lot has happened since the original announcement, so it’s way past time for an update.

I. The Quest to Spoof

Most importantly, other groups almost immediately went to work trying to refute the quantum supremacy claim, by finding some efficient classical algorithm to spoof the reported results. It’s important to understand that this is exactly how the process is supposed to work: as I’ve often stressed, a quantum supremacy claim is credible only if it’s open to the community to refute and if no one can. It’s also important to understand that, for reasons we’ll go into, there’s a decent chance that people will succeed in simulating the new experiment classically, although they haven’t yet. All parties to the discussion agree that the new experiment is, far and away, the closest any BosonSampling experiment has ever gotten to the quantum supremacy regime; the hard part is to figure out if it’s already there.

Part of me feels guilty that, as one of the reviewers on the Science paper—albeit, one stressed and harried by kids and covid—it’s now clear that I didn’t exercise the amount of diligence that I could have, in searching for ways to kill the new supremacy claim. But another part of me feels that, with quantum supremacy claims, much like with proposals for new cryptographic codes, vetting can’t be the responsibility of one or two reviewers. Instead, provided the claim is serious—as this one obviously is—the only thing to do is to get the paper out, so that the entire community can then work to knock it down. Communication between authors and skeptics is also a hell of a lot faster when it doesn’t need to go through a journal’s editorial system.

Not surprisingly, one skeptic of the new quantum supremacy claim is Gil Kalai, who (despite Google’s result last year, which Gil still believes must be in error) rejects the entire possibility of quantum supremacy on quasi-metaphysical grounds. But other skeptics are current and former members of the Google team, including Sergio Boixo and John Martinis! And—pause to enjoy the irony—Gil has effectively teamed up with the Google folks on questioning the new claim. Another central figure in the vetting effort—one from whom I’ve learned much of what I know about the relevant issues over the last week—is Dutch quantum optics professor and frequent Shtetl-Optimized commenter Jelmer Renema.

Without further ado, why might the new experiment, impressive though it was, be efficiently simulable classically? A central reason for concern is photon loss: as Chaoyang Lu has now explicitly confirmed (it was implicit in the paper), up to ~70% of the photons get lost on their way through the beamsplitter network, leaving only ~30% to be detected. At least with “Fock state” BosonSampling—i.e., the original kind, the kind with single-photon inputs that Alex Arkhipov and I proposed in 2011—it seems likely to me that such a loss rate would be fatal for quantum supremacy; see for example this 2019 paper by Renema, Shchesnovich, and Garcia-Patron.

Incidentally, if anything’s become clear over the last two weeks, it’s that I, the co-inventor of BosonSampling, am no longer any sort of expert on the subject’s literature!

Anyway, one source of uncertainty regarding the photon loss issue is that, as I said in my last post, the USTC experiment implemented a 2016 variant of BosonSampling called Gaussian BosonSampling (GBS)—and Jelmer tells me that the computational complexity of GBS in the presence of losses hasn’t yet been analyzed in the relevant regime, though there’s been work aiming in that direction. A second source of uncertainty is simply that the classical simulations work in a certain limit—namely, fixing the rate of noise and then letting the numbers of photons and modes go to infinity—but any real experiment has a fixed number of photons and modes (in USTC’s case, they’re ~50 and ~100 respectively). It wouldn’t do to reject USTC’s claim via a theoretical asymptotic argument that would equally well apply to any non-error-corrected quantum supremacy demonstration!

OK, but if an efficient classical simulation of lossy GBS experiments exists, then what is it? How does it work? It turns out that we have a plausible candidate for the answer to that, originating with a 2014 paper by Gil Kalai and Guy Kindler. Given a beamsplitter network, Kalai and Kindler considered an infinite hierarchy of better and better approximations to the BosonSampling distribution for that network. Roughly speaking, at the first level (k=1), one pretends that the photons are just classical distinguishable particles. At the second level (k=2), one correctly models quantum interference involving pairs of photons, but none of the higher-order interference. At the third level (k=3), one correctly models three-photon interference, and so on until k=n (where n is the total number of photons), when one has reproduced the original BosonSampling distribution. At least when k is small, the time needed to spoof outputs at the kth level of the hierarchy should grow like n^k. As theoretical computer scientists, Kalai and Kindler didn’t care whether their hierarchy produced any physically realistic kind of noise, but later work, by Shchesnovich, Renema, and others, showed that (as it happens) it does.
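
To make the bottom of that hierarchy concrete, here is a minimal Python sketch of a k=1 spoofer (my own illustration, not code from any of the groups involved): it treats the n photons as classical distinguishable particles, so each photon is routed through the interferometer independently, with output probabilities given by the squared magnitudes of the corresponding column of the unitary.

```python
import numpy as np
from scipy.stats import unitary_group

def distinguishable_sample(U, n, rng=None):
    """k=1 level of the Kalai-Kindler hierarchy: send each of n photons
    (injected into input modes 0..n-1) through the m-mode interferometer U
    independently, as if they were classical distinguishable particles.
    Returns the photon count in each output mode."""
    rng = rng or np.random.default_rng()
    m = U.shape[0]
    counts = np.zeros(m, dtype=int)
    for i in range(n):
        probs = np.abs(U[:, i]) ** 2      # single-photon output distribution
        counts[rng.choice(m, p=probs)] += 1
    return counts

U = unitary_group.rvs(8)                  # toy 8-mode interferometer
print(distinguishable_sample(U, n=4))
```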

In its original paper, the USTC team ruled out the possibility that the first, k=1 level of this hierarchy could explain its experimental results. More recently, in response to inquiries by Sergio, Gil, Jelmer, and others, Chaoyang tells me they’ve ruled out the possibility that the k=2 level can explain their results either. We’re now eagerly awaiting the answer for larger values of k.

Let me add that I owe Gil Kalai the following public mea culpa. While his objections to QC have often struck me as unmotivated and weird, in the case at hand, Gil’s 2014 work with Kindler is clearly helping drive the scientific discussion forward. In other words, at least with BosonSampling, it turns out that Gil put his finger precisely on a key issue. He did exactly what every QC skeptic should do, and what I’ve always implored the skeptics to do.

II. BosonSampling vs. Random Circuit Sampling: A Tale of HOG and CHOG and LXEB

There’s a broader question: why should skeptics of a BosonSampling experiment even have to think about messy details like the rate of photon losses? Why shouldn’t that be solely the experimenters’ job?

To understand what I mean, consider the situation with Random Circuit Sampling, the task Google demonstrated last year with 53 qubits. There, the Google team simply collected the output samples and fed them into a benchmark that they called “Linear Cross-Entropy” (LXEB), closely related to what Lijie Chen and I called “Heavy Output Generation” (HOG) in a 2017 paper. With suitable normalization, an ideal quantum computer would achieve an LXEB score of 2, while classical random guessing would achieve an LXEB score of 1. Crucially, according to a 2019 result by me and Sam Gunn, under a plausible (albeit strong) complexity assumption, no subexponential-time classical spoofing algorithm should be able to achieve an LXEB score that’s even slightly higher than 1. In its experiment, Google reported an LXEB score of about 1.002, with a confidence interval much smaller than 0.002. Hence: quantum supremacy (subject to our computational assumption), with no further need to know anything about the sources of noise in Google’s chip! (More explicitly, Boixo, Smelyansky, and Neven did a calculation in 2017 to show that the Kalai-Kindler type of spoofing strategy definitely isn’t going to work against RCS and Linear XEB, with no computational assumption needed.)
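
In code, the benchmark itself is almost trivial. Here is a hedged sketch (my own illustration, with made-up names) of computing a linear XEB score in the normalization used above, given the ideal probabilities of the observed samples:

```python
import numpy as np

def linear_xeb(ideal_probs_of_samples, n_qubits):
    """Linear cross-entropy score, normalized so that sampling from the ideal
    (Porter-Thomas-like) output distribution scores ~2 and uniform random
    guessing scores ~1.  `ideal_probs_of_samples` holds p_C(x) for each
    observed bitstring x, obtained by classically simulating the ideal
    circuit C (feasible only up to modest qubit counts)."""
    return 2 ** n_qubits * np.mean(ideal_probs_of_samples)

# sanity check: uniform guessing on n = 10 qubits scores exactly 1
n = 10
print(linear_xeb(np.full(1000, 1 / 2 ** n), n))   # -> 1.0
```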

So then why couldn’t the USTC team do something analogous with BosonSampling? Well, they tried to. They defined a measure that they called “HOG,” although it’s different from my and Lijie Chen’s HOG, more similar to a cross-entropy. Following Jelmer, let me call their measure CHOG, where the C could stand for Chinese, Chaoyang’s, or Changed. They calculated the CHOG for their experimental samples, and showed that it exceeds the CHOG that you’d get from the k=1 and k=2 levels of the Kalai-Kindler hierarchy, as well as from various other spoofing strategies, thereby ruling those out as classical explanations for their results.

The trouble is this: unlike with Random Circuit Sampling and LXEB, with BosonSampling and CHOG, we know that there are fast classical algorithms that achieve better scores than the trivial algorithm, the algorithm that just picks samples at random. That follows from Kalai and Kindler’s work, and it even more simply follows from a 2013 paper by me and Arkhipov, entitled “BosonSampling Is Far From Uniform.” Worse yet, with BosonSampling, we currently have no analogue of my 2019 result with Sam Gunn: that is, a result that would tell us (under suitable complexity assumptions) the highest possible CHOG score that we expect any efficient classical algorithm to be able to get. And since we don’t know exactly where that ceiling is, we can’t tell the experimentalists exactly what target they need to surpass in order to claim quantum supremacy. Absent such definitive guidance from us, the experimentalists are left playing whac-a-mole against this possible classical spoofing strategy, and that one, and that one.

This is an issue that I and others were aware of for years, although the new experiment has certainly underscored it. Had I understood just how serious the USTC group was about scaling up BosonSampling, and fast, I might’ve given the issue some more attention!

III. Fock vs. Gaussian BosonSampling

Above, I mentioned another complication in understanding the USTC experiment: namely, their reliance on Gaussian BosonSampling (GBS) rather than Fock BosonSampling (FBS), sometimes also called Aaronson-Arkhipov BosonSampling (AABS). Since I gave this issue short shrift in my previous post, let me make up for it now.

In FBS, the initial state consists of either 0 or 1 photons in each input mode, like so: |1,…,1,0,…,0⟩. We then pass the photons through our beamsplitter network, and measure the number of photons in each output mode. The result is that the amplitude of each possible output configuration can be expressed as the permanent of some n×n matrix, where n is the total number of photons. It was interest in the permanent, which plays a central role in classical computational complexity, that led me and Arkhipov to study BosonSampling in the first place.
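
For readers who want to see the central object in code: below is a short Python sketch (an illustration only) of Ryser’s formula, which computes an n-by-n permanent in roughly 2^n time rather than the n! time of the naive sum over permutations, and which is essentially the best known exact method.

```python
from itertools import combinations
import numpy as np

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula, O(2^n * n^2) time."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            row_sums = A[:, list(cols)].sum(axis=1)   # row sums over the column subset
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

print(permanent(np.eye(3)))            # -> 1.0
print(permanent([[1, 2], [3, 4]]))     # -> 1*4 + 2*3 = 10.0
```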

The trouble is, preparing initial states like |1,…,1,0,…,0⟩ turns out to be really hard. No one has yet built a source that reliably outputs one and only one photon at exactly a specified time. This led two experimental groups to propose an idea that, in a 2013 post on this blog, I named Scattershot BosonSampling (SBS). In SBS, you get to use the more readily available “Spontaneous Parametric Down-Conversion” (SPDC) photon sources, which output superpositions over different numbers of photons, of the form $$\sum_{n=0}^{\infty} \alpha_n |n \rangle |n \rangle, $$ where α_n decreases exponentially with n. You then measure the left half of each entangled pair, hope to see exactly one photon, and are guaranteed that if you do, then there’s also exactly one photon in the right half. Crucially, one can show that, if Fock BosonSampling is hard to simulate approximately using a classical computer, then the Scattershot kind must be as well.

OK, so what’s Gaussian BosonSampling? It’s simply the generalization of SBS where, instead of SPDC states, our input can be an arbitrary “Gaussian state”: for those in the know, a state that’s exponential in some quadratic polynomial in the creation operators. If there are m modes, then such a state requires ~m^2 independent parameters to specify. The quantum optics people have a much easier time creating these Gaussian states than they do creating single-photon Fock states.

While the amplitudes in FBS are given by permanents of matrices (and thus, the probabilities by the absolute squares of permanents), the probabilities in GBS are given by a more complicated matrix function called the Hafnian. Roughly speaking, while the permanent counts the number of perfect matchings in a bipartite graph, the Hafnian counts the number of perfect matchings in an arbitrary graph. The permanent and the Hafnian are both #P-complete. In the USTC paper, they talk about yet another matrix function called the “Torontonian,” which was invented two years ago. I gather that the Torontonian is just the modification of the Hafnian for the situation where you only have “threshold detectors” (which decide whether one or more photons are present in a given mode), rather than “number-resolving detectors” (which count how many photons are present).
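
To make the permanent/Hafnian relationship concrete, here is a brute-force Python sketch (an illustration only, exponential time): the Hafnian as a sum over perfect matchings, together with the standard trick of recovering an n-by-n permanent as the Hafnian of a 2n-by-2n block matrix.

```python
import numpy as np

def hafnian(B):
    """Naive Hafnian of a symmetric 2m x 2m matrix: the sum, over all perfect
    matchings of the index set, of the product of the matched entries."""
    def haf(rest):
        if not rest:
            return 1
        i, rest = rest[0], rest[1:]
        return sum(B[i][j] * haf(rest[:k] + rest[k + 1:])
                   for k, j in enumerate(rest))
    return haf(list(range(len(B))))

# perm(A) equals the Hafnian of the block matrix [[0, A], [A^T, 0]]
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.block([[np.zeros((2, 2)), A], [A.T, np.zeros((2, 2))]])
print(hafnian(B.tolist()))             # -> 10.0, the permanent of A
```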

If Gaussian BosonSampling includes Scattershot BosonSampling as a special case, and if Scattershot BosonSampling is at least as hard to simulate classically as the original BosonSampling, then you might hope that GBS would also be at least as hard to simulate classically as the original BosonSampling. Alas, this doesn’t follow. Why not? Because for all we know, a random GBS instance might be a lot easier than a random SBS instance. Just because permanents can be expressed using Hafnians, doesn’t mean that a random Hafnian is as hard as a random permanent.

Nevertheless, I think it’s very likely that the sort of analysis Arkhipov and I did back in 2011 could be mirrored in the Gaussian case. I.e., instead of starting with reasonable assumptions about the distribution and hardness of random permanents, and then concluding the classical hardness of approximate BosonSampling, one would start with reasonable assumptions about the distribution and hardness of random Hafnians (or “Torontonians”), and conclude the classical hardness of approximate GBS. But this is theoretical work that remains to be done!

IV. Application to Molecular Vibronic Spectra?

In 2014, Alan Aspuru-Guzik and collaborators put out a paper that made an amazing claim: namely that, contrary to what I and others had said, BosonSampling was not an intrinsically useless model of computation, good only for refuting QC skeptics like Gil Kalai! Instead, they said, a BosonSampling device (specifically, what would later be called a GBS device) could be directly applied to solve a practical problem in quantum chemistry. This is the computation of “molecular vibronic spectra,” also known as “Franck-Condon profiles,” whatever those are.

I never understood nearly enough about chemistry to evaluate this striking proposal, but I was always a bit skeptical of it, for the following reason. Nothing in the proposal seemed to take seriously that BosonSampling is a sampling task! A chemist would typically have some specific numbers that she wants to estimate, of which these “vibronic spectra” seemed to be an example. But while it’s often convenient to estimate physical quantities via Monte Carlo sampling over simulated observations of the physical system you care about, that’s not the only way to estimate physical quantities! And worryingly, in all the other examples we’d seen where BosonSampling could be used to estimate a number, the same number could also be estimated using one of several polynomial-time classical algorithms invented by Leonid Gurvits. So why should vibronic spectra be an exception?

After an email exchange with Alex Arkhipov, Juan Miguel Arrazola, Leonardo Novo, and Raul Garcia-Patron, I believe we finally got to the bottom of it, and the answer is: vibronic spectra are not an exception.

In terms of BosonSampling, the vibronic spectra task is simply to estimate the probability histogram of some weighted linear combination like $$ w_1 s_1 + \cdots + w_m s_m, $$ where w_1,…,w_m are fixed real numbers, and (s_1,…,s_m) is a possible outcome of the BosonSampling experiment, s_i representing the number of photons observed in mode i. Alas, while it takes some work, it turns out that Gurvits’s classical algorithms can be adapted to estimate these histograms. Granted, running the actual BosonSampling experiment would provide slightly more detailed information—namely, some exact sampled values of $$ w_1 s_1 + \cdots + w_m s_m, $$ rather than merely additive approximations to the values—but since we’d still need to sort those sampled values into coarse “bins” in order to compute a histogram, it’s not clear why that additional precision would ever be of chemical interest.
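
In code terms, the chemistry application then boils down to a binning step like the following (a hedged sketch with placeholder data; `samples` could come either from a GBS device or from a Gurvits-style classical estimator, and the Poisson draws below are just a stand-in, not actual GBS statistics):

```python
import numpy as np

# samples[i, j] = number of photons seen in mode j on run i (placeholder data)
samples = np.random.poisson(0.5, size=(10_000, 8))
w = np.linspace(100.0, 800.0, 8)       # the fixed weights w_1, ..., w_m

values = samples @ w                   # one value of w_1*s_1 + ... + w_m*s_m per run
hist, edges = np.histogram(values, bins=40)
profile = hist / hist.sum()            # the coarse-binned "vibronic profile"
```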

This is a pity, since if the vibronic spectra application had beaten what was doable classically, then it would’ve provided not merely a first practical use for BosonSampling, but also a lovely way to verify that a BosonSampling device was working as intended.

V. Application to Finding Dense Subgraphs?

A different potential application of Gaussian BosonSampling, first suggested by the Toronto-based startup Xanadu, is finding dense subgraphs in a graph. (Or at least, providing an initial seed to classical optimization methods that search for dense subgraphs.)

This is an NP-hard problem, so to say that I was skeptical of the proposal would be a gross understatement. Nevertheless, it turns out that there is a striking observation by the Xanadu team at the core of their proposal: namely that, given a graph G and a positive even integer k, a GBS device can be used to sample a random subgraph of G of size k, with probability proportional to the square of the number of perfect matchings in that subgraph. Cool, right? And potentially even useful, especially if the number of perfect matchings could serve as a rough indicator of the subgraph’s density! Alas, Xanadu’s Juan Miguel Arrazola himself recently told me that there’s a cubic-time classical algorithm for the same sampling task, so that the possible quantum speedup that one could get from GBS in this way is at most polynomial. The search for a useful application of BosonSampling continues!
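
To spell out the Xanadu observation in miniature: for an unweighted graph, the Hafnian of an (even-sized) induced subgraph’s adjacency matrix is just its number of perfect matchings, so for tiny graphs the target distribution can be written down by brute force. Here is a hedged Python sketch (my own toy illustration, which fixes the subgraph size k and ignores the fact that a real GBS device produces a mixture over different photon numbers):

```python
from itertools import combinations
import numpy as np

def num_perfect_matchings(adj, verts):
    """Brute-force count of perfect matchings of the subgraph induced on verts."""
    if not verts:
        return 1
    v, rest = verts[0], verts[1:]
    return sum(num_perfect_matchings(adj, rest[:i] + rest[i + 1:])
               for i, u in enumerate(rest) if adj[v][u])

A = np.array([[0, 1, 1, 0, 1, 0],      # toy 6-vertex graph (assumed adjacency matrix)
              [1, 0, 1, 1, 0, 0],
              [1, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 1, 1],
              [1, 0, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0]])
k = 4
weights = {S: num_perfect_matchings(A, list(S)) ** 2
           for S in combinations(range(len(A)), k)}
Z = sum(weights.values())
gbs_distribution = {S: w / Z for S, w in weights.items() if w > 0}
print(gbs_distribution)
```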


And that’s all for now! I’m grateful to all the colleagues I talked to over the last couple weeks, including Alex Arkhipov, Juan Miguel Arrazola, Sergio Boixo, Raul Garcia-Patron, Leonid Gurvits, Gil Kalai, Chaoyang Lu, John Martinis, and Jelmer Renema, while obviously taking sole responsibility for any errors in the above. I look forward to a spirited discussion in the comments, and of course I’ll post updates as I learn more!

Beth Harmon and the Inner World of Squares

December 14th, 2020

The other day Dana and I finished watching The Queen’s Gambit, Netflix’s fictional saga of an orphaned girl in the 1960’s, Beth Harmon, who breaks into competitive chess and destroys one opponent after the next in her quest to become the world champion, while confronting her inner demons and addictions.

The show is every bit as astoundingly good as everyone says it is, and I might be able to articulate why. It’s because, perhaps surprisingly given the description, this is a story where chess actually matters—and indeed, the fact that chess matters so deeply to Beth and most of the other characters is central to the narrative.  (As in two pivotal scenes where Beth has sex with a male player, and then either she or he goes right back to working on chess.)

I’ve watched a lot of TV shows and movies, supposedly about scientists, where the science was just an interchangeable backdrop to what the writers clearly regarded as a more important story.  (As one random example, the drama NUMB3RS, supposedly about an FBI mathematician, where “math” could’ve been swapped out for “mystical crime-fighting intuition” with barely any change.)

It’s true that a fictional work about scientists shouldn’t try to be a science documentary, just like Queen’s Gambit doesn’t try to be a chess documentary.  But if you’re telling a story about characters who are obsessed with topic X, then you need to make their obsession plausible, make the entire story hinge on it, and even make the audience vicariously feel the same obsession.

This is precisely what Queen’s Gambit does for chess.  It’s a chess drama where the characters are constantly talking about chess, thinking about chess, and playing chess—and that actually succeeds in making that riveting.  (Even if most of the audience can’t follow what’s happening on the board, it turns out that it doesn’t matter, since you can simply convey the drama through the characters’ faces and the reactions of those around them.)

Granted, a few aspects of competitive chess in the series stood out as jarringly unrealistic even to a novice like me: for example, the almost complete lack of draws.  But as for the board positions—well, apparently Kasparov was a consultant, and he helped meticulously design each one to reflect the characters’ skill levels and what was happening in the plot.

While the premise sounds like a feminist wish-fulfillment fantasy—orphan girl faces hundreds of intimidating white men in the sexist 1960s, orphan girl beats them all at their own game with style and aplomb—this is not at all a MeToo story, or a story about male crudity or predation.  It’s after bigger fish than that.  The series, you might say, conforms to all the external requirements of modern woke ideology, yet the actual plot subverts the tenets of that ideology, or maybe just ignores them, in its pursuit of more timeless themes.

At least once Beth Harmon enters the professional chess world, the central challenges she needs to overcome are internal and mental—just like they’re supposed to be in chess.  It’s not the Man or the Patriarchy or any other external power (besides, of course, skilled opponents) holding her down.  Again and again, the top male players are portrayed not as sexist brutes but as gracious, deferential, and even awestruck by Beth’s genius after she’s humiliated them on the chessboard.  And much of the story is about how those vanquished opponents then turn around and try to help Beth, and about how she needs to learn to accept their help in order to evolve as a player and a human being.

There’s also that, after defeating male player after male player, Beth sleeps with them, or at least wants to.  I confess that, as a teenager, I would’ve found that unlikely and astonishing.  I would’ve said: obviously, the only guys who’d even have a chance to prove themselves worthy of the affection of such a brilliant and unique woman would be those who could beat her at chess.  Anyone else would just be dirt between her toes.  In the series, though, each male player propositions Beth only after she’s soundly annihilated him.  And she’s never once shown refusing.

Obviously, I’m no Beth Harmon; I’ll never be close in my field to what she is in hers.  Equally obviously, I grew up in a loving family, not an orphanage.  Still, I was what some people would call a “child prodigy,” what with the finishing my PhD at 22 and whatnot, so naturally that colored my reaction to the show.

There’s a pattern that goes like this: you’re obsessively interested, from your first childhood exposure, in something that most people aren’t.  Once you learn what the something is, it’s evident to you that your life’s vocation couldn’t possibly be anything else, unless some external force prevents you.  Alas, in order to pursue the something, you first need to get past bullies and bureaucrats, who dismiss you as a nobody, put barriers in your way, despise whatever you represent to them.  After a few years, though, the bullies can no longer stop you: you’re finally among peers or superiors in your chosen field, regularly chatting with them on college campuses or at conferences in swanky hotels, and the main limiting factor is just the one between your ears. 

You feel intense rivalries with your new colleagues, of course, you desperately want to excel them, but the fact that they’re all on the same obsessive quest as you means you can never actually hate them, as you did the bureaucrats or the bullies.  There’s too much of you in your competitors, and of them in you.

As you pursue your calling, you feel yourself torn in the following way.  On the one hand, you feel close to a moral obligation to humanity not to throw away whatever “gift” you were “given” (what loaded terms), to take the calling as far as it will go.  On the other hand, you also want the same things other people want, like friendship, validation, and of course sex.

In such a case, two paths naturally beckon.  The first is that of asceticism: making a virtue of eschewing all temporal attachments, romance or even friendship, in order to devote yourself entirely to the calling.  The second is that of renouncing the calling, pretending it never existed, in order to fit in and have a normal life.  Your fundamental challenge is to figure out a third path, to plug yourself into a community where the relentless pursuit of the unusual vocation and the friendship and the sex can all complement each other rather than being at odds.

It would be an understatement to say that I have some familiarity with this narrative arc.

I’m aware, of course, of the irony, that I can identify with so many contours of Beth Harmon’s journey—I, Scott Aaronson, who half the Internet denounced six years ago as a misogynist monster who denies the personhood and interiority of women.  In that life-alteringly cruel slur, there was a microscopic grain of truth, and it’s this: I’m not talented at imagining myself into the situations of people different from me.  It’s never been my strong suit.  I might like and admire people different from me, I might sympathize with their struggles and wish them every happiness, but I still don’t know what they’re thinking until they tell me.  And even then, I don’t fully understand it.

As one small but illustrative example, I have no intuitive understanding—zero—of what it’s like to be romantically attracted to men, or what any man could do or say or look like that could possibly be attractive to women.  If you have such an understanding, then imagine yourself sighted and me blind.  Intellectually, I might know that confidence or height or deep brown eyes or brooding artistry are supposed to be attractive in human males, but only because I’m told.  As far as my intuition is concerned, pretty much all men are equally hairy, smelly, and gross, a large fraction of women are alluring and beautiful and angelic, and both of those are just objective features of reality that no one could possibly see otherwise.

Thus, whenever I read or watch fiction starring a female protagonist who dates men, it’s very easy for me to imagine that protagonist judging me, enumerating my faults, and rejecting me, and very hard for me to do what I’m supposed to do, which is to put myself into her shoes.  I could watch a thousand female protagonists kiss a thousand guys onscreen, or wake up in bed next to them, and the thousandth-and-first time I’d still be equally mystified about what she saw in such a sweaty oaf and why she didn’t run from him screaming, and I’d snap out of vicariously identifying with her.  (Understanding gay men of course presents similar difficulties; understanding lesbians is comparatively easy.)

It’s possible to overcome this, but it takes an extraordinary female protagonist, brought to life by an extraordinary writer.  Off the top of my head, I can think of only a few.  There were Renee Feuer and Eva Mueller, the cerebral protagonists of Rebecca Newberger Goldstein’s The Mind-Body Problem and The Late Summer Passion of a Woman of Mind.  Maybe Ellie Arroway from Carl Sagan’s Contact.  And then there’s Beth Harmon.  With characters like these, I can briefly enter a space where their crushes on men seem no weirder or more inexplicable to me than my own teenage crushes … just, you know, inverted.  Sex is in any case secondary to the character’s primary urge to discover timeless truths, an urge that I fully understand because I’ve shared it.

Granted, the timeless truths of chess, an arbitrary and invented game, are less profound than those of quantum gravity or the P vs. NP problem, but the psychology is much the same, and The Queen’s Gambit does a good job of showing that.  To understand the characters of this series is to understand why they could be happier to lose an interesting game than to win a boring one.  And I could appreciate that, even if I was by no means the strongest player at my elementary school’s chess club, and the handicap with which I can beat my 7-year-old daughter is steadily decreasing.

Happy Chanukah / Vaccine Approval Day!

December 11th, 2020
  1. Inspired by my survey article, John Pavlus has now published an article on Busy Beaver for Quanta magazine.
  2. This week, I flitted back and forth between two virtual conferences: the Institute for Advanced Study’s Online Workshop on Qubits and Black Holes (which I co-organized with Juan Maldacena and Mark Van Raamsdonk), and Q2B (Quantum 2 Business) 2020, organized by QC Ware, for which I did my now-annual Ask-Me-Anything session. It was an interesting experience, switching between Euclidean path integrals and replica wormholes that I barely understood, and corporate pitches for near-term quantum computing that I … well, did understand! Anyway, happy to discuss either conference in the comments.
  3. For anyone interested in the new Chinese quantum supremacy claim based on Gaussian BosonSampling—the story has been developing rapidly all week, with multiple groups trying to understand the classical difficulty of simulating the experiment. I’ll plan to write a followup post soon!
  4. The Complexity Zoo has now officially moved from the University of Waterloo to complexityzoo.net, hosted by the LessWrong folks! Thanks so much to Oliver Habryka for setting this up. Update (Dec. 12): Alas, complexityzoo.com no longer works if you use https. I don’t know how to fix it—the Bluehost control panel provides no options—and I’m not at a point in life where I can deal again with Bluehost SSL certificate hell. (How does everyone else deal with this shit? That’s the one part I don’t understand.) So, for now, you’ll need to update your bookmarks to complexityzoo.net.
  5. In return for his help with the Zoo, Oliver asked me to help publicize a handsome $29 five-book set, “A Map that Reflects the Territory,” containing a selection of the best essays from LessWrong, including multiple essays by the much-missed Scott Alexander, and an essay on common knowledge inspired by my own Common Knowledge and Aumann’s Agreement Theorem. (See also the FAQ.) If you know any LW fans, I can think of few better gifts to go under their Christmas tree or secular rationalist equivalent.

Shor’s algorithm in higher dimensions: Guest post by Greg Kuperberg

December 7th, 2020

Upbeat advertisement: If research in QC theory or CS theory otherwise is your thing, then wouldn’t you like to live in peaceful, quiet, bicycle-based Davis, California, and be a faculty member at the large, prestigious, friendly university known as UC Davis? In the QCQI sphere, you’d have Marina Radulaski, Bruno Nachtergaele, Martin Fraas, Mukund Rangamani, Veronika Hubeny, and Nick Curro as faculty colleagues, among others; and yours truly, and hopefully more people in the future. This year the UC Davis CS department has a faculty opening in quantum computing, and another faculty opening in CS theory including quantum computing. If you are interested, then time is of the essence, since the full-consideration deadline is December 15.


In this guest post, I will toot my own horn about a paper in progress (hopefully nearly finished) that goes back to the revolutionary early days of quantum computing, namely Shor’s algorithm. The takeaway: I think that the strongest multidimensional generalization of Shor’s algorithm has been missed for decades. It appears to be a new algorithm that does more than the standard generalization described by Kitaev. (Scott wanted me to channel Captain Kirk and boldly go with a takeaway, so I did.)

Unlike for Shor’s algorithm proper, I don’t know of any dramatic applications of this new algorithm. However, more than one quantum algorithm was discovered just because it looked interesting, and then found applications later. The input to Shor’s algorithm is a function \(f:\mathbb{Z} \to S\), in other words a symbol-valued function \(f\) on the integers, which is periodic with an unknown period \(p\) and otherwise injective. In equations, \(f(x) = f(y)\) if and only if \(p\) divides \(x-y\). In saying that the input is a function \(f\), I mean that Shor’s algorithm is provided with an algorithm to compute \(f\) efficiently. Shor’s algorithm itself can then find the period \(p\) in (quantum) polynomial time in the number of digits of \(p\). (Not polynomial time in \(p\), polynomial time in its logarithm.) If you’ve heard that Shor’s algorithm can factor integers, that is just one special case where \(f(x) = a^x\) mod \(N\), the integer to factor. In its generalized form, Shor’s algorithm is miraculous. In particular, if \(f\) is a black-box function, then it is routine to prove that any classical algorithm to do the same thing needs exponentially many values of \(f\), or values \(f(x)\) where \(x\) has exponentially many digits.
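
For the factoring special case, here is a toy classical sketch in Python of the reduction around period-finding; the one step for which a quantum computer is needed, finding the period r, is done here by brute force, which of course defeats the purpose for large N.

```python
from math import gcd

N, a = 15, 7                     # toy instance: factor N = 15 using base a = 7

def f(x):                        # the periodic function handed to Shor's algorithm
    return pow(a, x, N)

# Shor's algorithm finds the period r quantumly; here we cheat and brute-force it.
r = next(r for r in range(1, N) if f(r) == 1)       # r = 4 for this instance
assert r % 2 == 0 and pow(a, r // 2, N) != N - 1    # the "lucky" case of the reduction
y = pow(a, r // 2, N)
print(gcd(y - 1, N), gcd(y + 1, N))                 # -> 3 5
```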

Shor’s algorithm begat the Shor-Kitaev algorithm, which does the same thing for a higher dimensional periodic function \(f:\mathbb{Z}^d \to S\), where \(f\) is now periodic with respect to a lattice \(L\). The Shor-Kitaev algorithm in turn begat the hidden subgroup problem (called HSP among friends), where \(\mathbb{Z}\) or \(\mathbb{Z}^d\) is replaced by a group \(G\), and now \(f\) is \(L\)-periodic for some subgroup \(L\). HSP varies substantially in both its computational difficulty and its complexity status, depending on the structure of \(G\) as well as optional restrictions on \(L\).

A funny thing happened on the way to the forum in later work on HSP. Most of the later work has been in the special case that the ambient group \(G\) is finite, even though \(G\) is infinite in the famous case of Shor’s algorithm. My paper-to-be explores the hidden subgroup problem in various cases when \(G\) is infinite. In particular, I noticed that even the case \(G = \mathbb{Z}^d\) isn’t fully solved, because the Shor-Kitaev algorithm makes the extra assumption that \(L\) is a maximum-rank lattice, or equivalently that \(L\) is a finite-index subgroup of \(\mathbb{Z}^d\). As far as I know, the more general case where \(L\) might have lower rank wasn’t treated previously. I found an extension of Shor-Kitaev to handle this case, which I will sketch after discussing some points about HSP in general.

Quantum algorithms for HSP

Every known quantum algorithm for HSP has the same two opening steps. First prepare an equal superposition \(|\psi_G\rangle\) of “all” elements of the ambient group \(G\), then apply a unitary form of the hiding function \(f\) to get the following: \[ U_f|\psi_G\rangle \propto \sum_{x \in G} |x,f(x)\rangle. \] Actually, you can only do exactly this when \(G\) is a finite group. You cannot make an equal quantum superposition on an infinite set, for the same reason that you cannot choose an integer uniformly at random from among all of the integers: It would defy the laws of probability. Since computers are finite, a realistic quantum algorithm cannot make an unequal quantum superposition on an infinite set either. However, if \(G\) is a well-behaved infinite group, then you can approximate the same idea by making an equal superposition on a large but finite box \(B \subseteq G\) instead: \[ U_f|\psi_B\rangle \propto \sum_{x \in B \subseteq G} |x,f(x)\rangle. \] Quantum algorithms for HSP now follow a third counterintuitive “step”, namely, that you should discard the output qubits that contain the value \(f(x)\). You should take the values of \(f\) to be incomprehensible data, encrypted for all you know. A good quantum algorithm evaluates \(f\) too few times to interpret its output, so you might as well let it go. (By contrast, a classical algorithm is forced to dig for the only meaningful information that the output of \(f\) has. Namely, it has to keep searching until it finds equal values.) What remains, and what turns out to be highly valuable, is the input state in a partially measured form. I remember joking with Cris Moore about the different ways of looking at this step:

  1. You can measure the output qubits.
  2. The janitor can fish the output qubits out of the trash and measure them for you.
  3. You can secretly not measure the output qubits and say you did.
  4. You can keep the output qubits and say you threw them away.

Measuring the output qubits wins you the purely mathematical convenience that the posterior state on the input qubits is pure (a vector state) rather than mixed (a density matrix). However, since no use is made of the measured value, it truly makes no difference for the algorithm.

The final universal step for all HSP quantum algorithms is to apply a quantum Fourier transform (or QFT) to the input register and measure the resulting Fourier mode. This might seem like a creative step that may or may not be a good idea. However, if you have an efficient algorithm for the QFT for your particular group \(G\), then you might as well do this, because (taking the interpretation that you threw away the output register) the environment already knows the Fourier mode. You can assume that this Fourier mode has been published in the New York Times, and you won’t lose anything by reading the papers.

Fourier modes and Fourier stripes

I’ll now let \(G = \mathbb{Z}^d\) and make things more explicit, for starters by putting arrows on elements \(\vec{x} \in \mathbb{Z}^d\) to indicate that they are lattice vectors. The standard beginning produces a superposition \(|\psi_{L+\vec{v}}\rangle\) on a translate \(L+\vec{v}\) of the hidden lattice \(L\). (Again, \(L\) is the periodicity of \(f\).) If this state could be an equal superposition on the infinite set \(L+\vec{v}\), and if you could do a perfect QFT on the infinite group \(\mathbb{Z}^d\), then the resulting Fourier mode would be a randomly chosen element of a certain dual group \(L^\# \subseteq (\mathbb{R}/\mathbb{Z})^d\) inside the torus of Fourier modes of \(\mathbb{Z}^d\). Namely, \(L^\#\) consists of those vectors \(\vec{y} \in (\mathbb{R}/\mathbb{Z})^d\) such that the dot product \(\vec{x} \cdot \vec{y}\) is an integer for every \(\vec{x} \in L\). (If you expected the Fourier dual of the integers \(\mathbb{Z}\) to be a circle \(\mathbb{R}/2\pi\mathbb{Z}\) of length \(2\pi\), I found it convenient here to rescale it to a circle \(\mathbb{R}/\mathbb{Z}\) of length 1. This is often considered gauche these days, like using \(h\) instead of \(\hbar\) in quantum mechanics, but in context it’s okay.) In principle, you can learn \(L^\#\) from sampling it, and then learn \(L\) from \(L^\#\). Happily, the unknown and irrelevant translation vector \(\vec{v}\) is erased in this method.

In practice, it’s not so simple. As before, you cannot actually make an equal superposition on all of \(L+\vec{v}\), but only trimmed to a box \(B \subseteq \mathbb{Z}^d\). If you have \(q\) qubits available for each coordinate of \(\mathbb{Z}^d\), then \(B\) might be a \(d\)-dimensional cube with \(Q = 2^q\) lattice points in each direction. Following Peter Shor’s famous paper, the standard thing to do here is to identify \(B\) with the finite group \((\mathbb{Z}/Q)^d\) and do the QFT there instead. This is gauche as pure mathematics, but it’s reasonable as computer science. In any case, it works, but it comes at a price. You should rescale the resulting Fourier mode \(\vec{y} \in (\mathbb{Z}/Q)^d\) as \(\vec{y}_1 = \vec{y}/Q\) to match it to the torus \((\mathbb{R}/\mathbb{Z})^d\). Even if you do that, \(\vec{y}_1\) is not actually a uniformly random element of \(L^\#\), but rather a noisy, discretized approximation of one.

In Shor’s algorithm, the remaining work is often interpreted as the post-climax. In this case \(L = p\mathbb{Z}\), where \(p\) is the hidden period of \(f\), and \(L^\#\) consists of the multiples of \(1/p\) in \(\mathbb{R}/\mathbb{Z}\). The Fourier mode \(y_1\) (skipping the arrow since we are in one dimension) is an approximation to some fraction \(r/p\) with roughly \(q\) binary digits of precision. (\(y_1\) is often but not always the very best binary approximation to \(r/p\) with the available precision.) If you have enough precision, you can learn a fraction from its digits, either in base 2 or in any base. For instance, if I’m thinking of a fraction that is approximately 0.2857, then 2/7 is much closer than any other fraction with a one-digit denominator. As many people know, and as Shor explained in his paper, continued fractions are an efficient and optimal algorithm for this in larger cases.
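
The 0.2857 example is literally a two-liner in Python, since the standard library’s `Fraction.limit_denominator` performs exactly this continued-fraction rounding (an illustration only; in the actual algorithm the denominator bound comes from an a priori bound on the period):

```python
from fractions import Fraction

y1 = 0.2857                                  # noisy estimate of some fraction r/p
print(Fraction(y1).limit_denominator(10))    # -> 2/7
```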

The Shor-Kitaev algorithm works the same way. You can denoise each coordinate of each Fourier sample \(\vec{y}_1\) with the continued fraction algorithm to obtain an exact element \(\vec{y}_0 \in L^\#\). You can learn \(L^\#\) with a polynomial number of samples, and then learn \(L\) from that with integer linear algebra. However, this approach can only work if \(L^\#\) is a finite group, or equivalently when \(L\) has maximum rank \(d\). This condition is explicitly stated in Kitaev’s paper, and in most but not all of the papers and books that cite this algorithm. If \(L\) has maximum rank, then the picture in Fourier space looks like this:

However, if \(L\) has rank \(\ell < d\), then \(L^\#\) is a pattern of \((d-\ell)\)-dimensional stripes, like this instead:

In this case, as the picture indicates, each coordinate of \(\vec{y}_1\) is flat random and individually irreparable. If you knew the direction of the stripes, then you could define a slanted coordinate system where some of the coordinates of \(\vec{y}_1\) could be repaired. But the tangent directions of \(L^\#\) essentially beg the question. They are the orthogonal space of \(L_\mathbb{R}\), the vector space subtended by the hidden subgroup \(L\). If you know \(L_\mathbb{R}\), then you can find \(L\) by running Shor-Kitaev in the lattice \(L_\mathbb{R} \cap \mathbb{Z}^d\).
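
For concreteness, here is a minimal two-dimensional example of the rank-deficient case (my own illustration, not from the paper). Take \(d = 2\) and the rank-\(\ell = 1\) hidden lattice \[ L = \{(a, a) : a \in \mathbb{Z}\} \subseteq \mathbb{Z}^2. \] Then \(\vec{y} = (y_1, y_2)\) lies in \(L^\#\) exactly when \(a(y_1 + y_2) \in \mathbb{Z}\) for every integer \(a\), i.e. \[ L^\# = \{(t, -t) : t \in \mathbb{R}/\mathbb{Z}\}, \] a single \((d - \ell) = 1\)-dimensional stripe winding diagonally around the torus. Each coordinate of a sample from \(L^\#\) is uniformly random on its own, but the slanted combination \(y_1 + y_2\) is exactly \(0 \bmod 1\), and hence can be repaired from a noisy sample.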

My solution to this conundrum is to observe that the multiples of a randomly chosen point \(\vec{y}_0\) in \(L^\#\) have a good chance of filling out \(L^\#\) adequately well, in particular to land near \(\vec{0}\) often enough to reveal the tangent directions of \(L^\#\). You have to make do with a noisy sample \(\vec{y}_1\) instead, but by making the QFT radix \(Q\) large enough, you can reduce the noise well enough for this to work. Still, even if you know that these small, high-quality multiples of \(\vec{y}_1\) exist, they are needles in an exponential haystack of bad multiples, so how do you find them? It turns out that the versatile LLL algorithm, which finds a basis of short vectors in a lattice, can be used here. The multiples of \(\vec{y}_0\) (say, for simplicity) aren’t a lattice, they are a dense orbit in \(L^\#\) or part of it. However, they are a shadow of a lattice one dimension higher, which you can supply to the LLL algorithm. This step lets you compute the linear span \(L_\mathbb{R}\) of \(L\) from its perpendicular space, and then as mentioned you can use Shor-Kitaev to learn the exact geometry of \(L\).

Quantum supremacy, now with BosonSampling

December 3rd, 2020

Update (12/5): The Google team, along with Gil Kalai, have raised questions about whether the results of the new BosonSampling experiment might be easier to spoof classically than the USTC team thought they were, because of a crucial difference between BosonSampling and qubit-based random circuit sampling. Namely, with random circuit sampling, the marginal distribution over any k output qubits (for small k) is exponentially close to the uniform distribution. With BosonSampling, by contrast, the marginal distribution over k output modes is distinguishable from uniform, as Arkhipov and I noted in a 2013 followup paper. On the one hand, these easily-detected nonuniformities provide a quick, useful sanity check for whether BosonSampling is being done correctly. On the other hand, they might also give classical spoofing algorithms more of a toehold. The question is whether, by spoofing the k-mode marginals, a classical algorithm could also achieve scores on the relevant “HOG” (Heavy Output Generation) benchmark that are comparable to what the USTC team reported.

One way or the other, this question should be resolvable by looking at the data that’s already been collected, and we’re trying now to get to the bottom of it. And having failed to flag this potential issue when I reviewed the paper, I felt a moral obligation at least to let my readers know about it as soon as I did. If nothing else, this is an answer to those who claim this stuff is all obvious. Please pardon the science underway!


A group led by Jianwei Pan and Chao-Yang Lu, based mainly at USTC in Hefei, China, announced today that it achieved BosonSampling with 40-70 detected photons—up to and beyond the limit where a classical supercomputer could feasibly verify the results. (Technically, they achieved a variant called Gaussian BosonSampling: a generalization of what I called Scattershot BosonSampling in a 2013 post on this blog.)

For more, see also Emily Conover’s piece in Science News, or Daniel Garisto’s in Scientific American, both of which I consulted on. (Full disclosure: I was one of the reviewers for the Pan group’s Science paper, and will be writing the Perspective article to accompany it.)

The new result follows the announcement of 14-photon BosonSampling by the same group a year ago. It represents the second time quantum supremacy has been reported, following Google’s celebrated announcement from last year, and the first time it’s been done using photonics rather than superconducting qubits.

As the co-inventor of BosonSampling (with Alex Arkhipov), obviously I’m gratified about this.

For anyone who regards it as boring or obvious, here and here is Gil Kalai, on this blog, telling me why BosonSampling would never scale beyond 8-10 photons. (He wrote that, if aliens forced us to try, then much like with the Ramsey number R(6,6), our only hope would be to attack the aliens.) Here’s Kalai making a similar prediction, on the impossibility of quantum supremacy by BosonSampling or any other means, in his plenary address to the International Congress of Mathematicians two years ago.

Even if we set aside the quantum computing skeptics, many colleagues told me they thought experimental BosonSampling was a dead end, because of photon losses and the staggering difficulty of synchronizing 50-100 single-photon sources. They said that a convincing demonstration of quantum supremacy would have to await the arrival of quantum fault-tolerance—or at any rate, some hardware platform more robust than photonics. I always agreed that they might be right. Furthermore, even if 50-photon BosonSampling was possible, after Google reached the supremacy milestone first with superconducting qubits, it wasn’t clear if anyone would still bother. Even when I learned a year ago about the USTC group’s intention to go for it, I was skeptical, figuring I’d believe it when I saw it.

Obviously the new result isn’t dispositive. Nevertheless, as someone whose intellectual origins are close to pure math, it’s strange and exciting to find myself in a field where, once in a while, the world itself gets to weigh in on a theoretical disagreement.

Since excitement is best when paired with accurate understanding, please help yourself to the following FAQ, which I might add more to over the next couple days.

What is BosonSampling? You must be new here! Briefly, it’s a proposal for achieving quantum supremacy by simply passing identical, non-interacting photons through an array of beamsplitters, and then measuring where they end up. For more: in increasing order of difficulty, here’s an MIT News article from back in 2011, here’s the Wikipedia page, here are my PowerPoint slides, here are my lecture notes from Rio de Janeiro, and here’s my original paper with Arkhipov.

What is quantum supremacy? Roughly, the use of a programmable or configurable quantum computer to solve some well-defined computational problem much faster than we know how to solve it with any existing classical computer. “Quantum supremacy,” a term coined by John Preskill in 2012, does not mean useful QC, or scalable QC, or fault-tolerant QC, all of which remain outstanding challenges. For more, see my Supreme Quantum Supremacy FAQ, or (e.g.) my recent Lytle Lecture for the University of Washington.

If Google already announced quantum supremacy a year ago, what’s the point of this new experiment? To me, at least, quantum supremacy seems important enough to do at least twice! Also, as I said, this represents the first demonstration that quantum supremacy is possible via photonics. Finally, as the authors point out, the new experiment has one big technical advantage compared to Google’s: namely, many more possible output states (~10^30 of them, rather than a mere ~9 quadrillion). This makes it infeasible to calculate the whole probability distribution over outputs and store it on a gigantic hard disk (after which one could easily generate as many samples as one wanted), which is what IBM proposed doing in its response to Google’s announcement.

Is BosonSampling a form of universal quantum computing? No, we don’t even think it can simulate universal classical computing! It’s designed for exactly one task: namely, demonstrating quantum supremacy and refuting Gil Kalai. It might have some other applications besides that, but if so, they’ll be icing on the cake. This is in contrast to Google’s Sycamore processor, which in principle is a universal quantum computer, just with a severe limit on the number of qubits (53) and how many layers of gates one can apply to them (about 20).

Is BosonSampling at least a step toward universal quantum computing? I think so! In 2000, Knill, Laflamme, and Milburn (KLM) famously showed that pure, non-interacting photons, passing through a network of beamsplitters, are capable of universal QC, provided we assume one extra thing: namely, the ability to measure the photons at intermediate times, and change which beamsplitters to apply to the remaining photons depending on the outcome. In other words, “BosonSampling plus adaptive measurements equals universality.” Basically, KLM is the holy grail that experimental optics groups around the world have been working toward for 20 years, with BosonSampling just a more achievable pit stop along the way.

Are there any applications of BosonSampling? We don’t know yet. There are proposals in the literature to apply BosonSampling to vibronic spectra in quantum chemistry, finding dense subgraphs, and other problems, but I’m not yet sure whether these proposals will yield real speedups over the best we can do with classical computers, for a task of practical interest that involves estimating specific numbers (as opposed to sampling tasks, where BosonSampling almost certainly does yield exponential speedups, but which are rarely the thing practitioners directly care about). [See this comment for further discussion of the issues regarding dense subgraphs.] In a completely different direction, one could try to use BosonSampling to generate cryptographically certified random bits, along the lines of my proposal from 2018, much like one could with qubit-based quantum circuits.

How hard is it to simulate BosonSampling on a classical computer? As far as we know today, the difficulty of simulating a “generic” BosonSampling experiment increases roughly like 2^n, where n is the number of detected photons. It might be easier than that, particularly when noise and imperfections are taken into account; and at any rate it might be easier to spoof the statistical tests that one applies to verify the outputs. I and others managed to give some theoretical evidence against those possibilities, but just like with Google’s experiment, it’s conceivable that some future breakthrough will change the outlook and remove the case for quantum supremacy.

Do you have any amusing stories? When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there. This was by far the most expensive referee report I ever wrote!

Also: when Covid first started, and facemasks were plentiful in China but almost impossible to get in the US, Chao-Yang Lu, one of the leaders of the new work and my sometime correspondent on the theory of BosonSampling, decided to mail me a box of 200 masks (I didn’t ask for it). I don’t think that influenced my later review, but it was appreciated nonetheless.

Huge congratulations to the whole team for their accomplishment!

Happy Thanksgiving Y’All!

November 25th, 2020

While a lot of pain is still ahead, this year I’m thankful that a dark chapter in American history might be finally drawing to a close. I’m thankful that the mRNA vaccines actually work. I’m thankful that my family has remained safe, and I’m thankful for all the essential workers who’ve kept our civilization running.

A few things:

  1. Friend-of-the-blog Jelani Nelson asked me to advertise an important questionnaire for theoretical computer scientists, about what the future of STOC and FOCS should look like (for example, should they become all virtual?). It only takes 2 or 3 minutes to fill out (I just did).
  2. Here’s a podcast that I recently did with UT Austin undergraduate Dwarkesh Patel. (As usual, I recommend 2x speed to compensate for my verbal tics.)
  3. Feel free to use the comments on this post to talk about recent progress in quantum computing or computational complexity! Like, I dunno, a (sub)exponential black-box speedup for the adiabatic algorithm, or anti-concentration for log-depth random quantum circuits, or an improved shadow tomography procedure, or a quantum algorithm for nonlinear differential equations, or a barrier to proving strong 3-party parallel repetition, or equivalence of one-way functions and time-bounded Kolmogorov complexity, or turning any hard-on-average NP problem into one that’s guaranteed to have solutions.
  4. It’s funny how quantum computing, P vs. NP, and so forth can come to feel like just an utterly mundane day job, not something anyone outside a small circle could possibly want to talk about while the fate of civilization hangs in the balance. Sometimes it takes my readers to remind me that not only are these topics what brought most of you here in the first place, they’re also awesome! So, I’ll mark that down as one more thing to be thankful for.

Huck Finn and the gaslighting of America

November 23rd, 2020

For the past month, I’ve been reading The Adventures of Huckleberry Finn to my 7-year-old daughter Lily. Is she too young for it? Is there a danger that she’ll slip up and say the n-word in school? I guess so, maybe. But I found it worthwhile just for the understanding that lit up her face when she realized what it meant that Huck would help Jim escape slavery even though Huck really, genuinely believed that he’d burn in hell for it.

Huck Finn has been one of my favorite books since I was just slightly older than Lily. It’s the greatest statement by one of history’s greatest writers about human stupidity and gullibility and evil and greed, but also about the power of seeing what’s right in front of your nose to counteract those forces. (It’s also a go-to source for details of 19th-century river navigation.) It rocks.

The other day, after we finished a chapter, I asked Lily whether she thought that injustice against Black people in America ended with the abolition of slavery. No, she replied. I asked: how much longer did it continue for? She said she didn’t know. So I said: if I told you that once, people in charge of an American election tried to throw away millions of votes that came from places where Black people lived—supposedly because some numbers didn’t exactly add up, except they didn’t care about similar numbers not adding up in places where White people lived—how long ago would she guess that happened? 100 years ago? 50 years? She didn’t know. So I showed her the news from the last hour.

These past few weeks, my comment queue has filled with missives, most of which I’ve declined to publish, about the giant conspiracy involving George Soros and Venezuela and dead people, which fabricated the overwhelmingly Democratic votes from overwhelmingly Democratic cities like Philadelphia and Milwaukee and Detroit (though for some reason, they weren’t quite as overwhelmingly Democratic as in other recent elections), while for some reason declining to help Democrats in downballot races. Always, these commenters confidently insist, I’m the Pravda-reading brainwashed dupe, I’m the unreasonable one, if I don’t accept this.

This is the literal meaning of “gaslighting”: the intentional construction of an alternate reality so insistently as to make the sane doubt their sanity. It occurred to me: Huck Finn could be read as an extended fable about gaslighting. The Grangerfords make their deadly feud with the Shepherdsons seem normal and natural. The fraudulent King and Duke make Huck salute them as royalty. Tom convinces Huck that the former’s harebrained schemes for freeing Jim are just the way it’s done, and Huck is an idiot for preferring the simplistic approach of just freeing him. And of course, the entire culture gaslights Huck that good is evil and evil is good. Huck doesn’t fight the gaslighting as hard as we’d like him to, but he develops as a character to the extent he does.

Today, the Confederacy—which, as we’ve learned the past five years, never died, and is as alive and angry now as it was in Twain’s time—is trying to win by gaslighting what it couldn’t win at Antietam and Gettysburg and Vicksburg. It’s betting that if it just insists, adamantly enough, that someone who lost an election by hundreds of thousands of votes spread across multiple states actually won the election, then it can bend the universe to its will.

Glued to the news, listening to Giuliani and McEnany and so on, reading the Trump campaign’s legal briefs, I keep asking myself one question: do they actually believe this shit? Some of the only insight I got about that question came from a long piece by Curtis Yarvin a.k.a. Mencius Moldbug, who’s been called one of the leading thinkers of neoreaction and who sometimes responds to this blog. Esoterically, Yarvin says that he actually prefers a Biden victory, but only because Trump has proven himself unworthy by submitting himself to nerdy electoral rules rather than simply seizing power. (If that’s not quite what Yarvin meant—well, I’m about as competent to render his irony-soaked meanings in plain language as I’d be to render Heidegger or Judith Butler!)

As for whether the election was “fraudulent,” here’s Yarvin’s key passage:

The fundamental purpose of a democratic election is to test the strength of the sides in a civil conflict, without anyone actually getting hurt. The majority wins because the strongest side would win … But this guess is much better if it actually measures humans who are both willing and able to walk down the street and show up. Anyone who cannot show up at the booth is unlikely to show up for the civil war. This is one of many reasons that an in-person election is a more accurate election. (If voters could be qualified by physique, it would be even more accurate) … My sense is that in many urban communities, voting by proxy in some sense is the norm. The people whose names are on the ballots really exist; and almost all of them actually did support China Joe. Or at least, preferred him. The extent to which they perform any tangible political action, including physically going to the booth, is very low; so is their engagement with the political system. They do not watch much CNN. The demand for records of their engagement is very high, because each such datum cancels out some huge, heavily-armed redneck with a bass boat. This is why, in the data, these cities look politics-obsessed, but photos of the polling places look empty. Most votes from these communities are in some sense “organized” … Whether or not such a design constitutes “fraud” is the judge’s de gustibus.

Did you catch that? Somehow, Yarvin manages to insinuate that votes for Biden are plausibly fraudulent and plausibly shouldn’t count—at least if they were cast by mail, in “many urban communities” (which ones?), during a pandemic—even as he glancingly acknowledges that the votes in question actually exist and are actually associated with Biden-preferring legal voters. This is gaslighting in pure, abstract form, unalloyed with the laughable claims about Hugo Chávez or Dominion Voting Systems.

What I find terrifying about gaslighting is that it’s so effective. In response to this post, I’ll again get long, erudite comments making the case that up is down, turkeys are mammals, and Trump won in a landslide. And simply to read and understand those comments, some part of me will need to entertain the idea that they might be right. Much like with Bigfoot theories, this will be purely a function of the effort the writers put in, not of any merit to the thesis.

And there’s a second thing I find terrifying about gaslighting. Namely: it turns me into an ally of the SneerClubbers. Like them, I feel barely any space left for rational discussion or argument. Like them, I find it difficult to think of an appropriate response to Trumpian conspiracy theorists except to ridicule them, shame them as racists, and try to mute their influence. Notably, public shaming (“[t]he Trump stain, the stain of racism that you, William Hartmann and Monica Palmer, have covered yourself in, is going to follow you throughout history”) seems to have actually worked last week to get the Wayne County Board of Canvassers to back down and certify the votes from Detroit. So why not try more of it?

Of course, even if I agree with the wokeists that there’s a line beyond which rational discussion can’t reach, I radically disagree with them about the line’s whereabouts. Here, for example, I try to draw mine generously enough to include any Republicans willing to stand up, however feebly, against the Trump cult, whereas the wokeists draw their line so narrowly as to exclude most Democrats (!).

There’s a more fundamental difference as well: the wokeists define their worldview in opposition to the patriarchy, the white male power structure, or whatever else is preventing utopia. I, taking inspiration from Huck, define my moral worldview in opposition to gaslighting itself, whatever its source, and in favor of acknowledging obvious realities (especially realities about any harm we might be causing others). Thus, it’s not just that I see no tension between opposing the excesses of the woke and opposing Trump’s attempted putsch—rather, it’s that my opposition to both comes from exactly the same source. It’s a source that, at least in me, often runs dry of courage, but I’ve found Huck Finn to be helpful in replenishing it, and for that I’m grateful.

Endnote: There are, of course, many actual security problems with the way we vote in the US, and there are computer scientists who’ve studied those problems for decades, rather than suddenly getting selectively interested in November 2020. If you’re interested, see this letter (“Scientists say no credible evidence of computer fraud in the 2020 election outcome, but policymakers must work with experts to improve confidence”), which was signed by 59 of the leading figures in computer security, including Ron Rivest, Bruce Schneier, Hovav Shacham, Dan Wallach, Ed Felten, David Dill, and my childhood best friend Alex Halderman.

Update: I just noticed this Twitter thread by friend-of-the-blog Sean Carroll, which says a lot of what I was trying to say here.

Annual post: Come join UT Austin’s Quantum Information Center!

November 18th, 2020

Hook ’em Hadamards!

If you’re a prospective PhD student: Apply here for the CS department (the deadline this year is December 15th), here for the physics department (the deadline is December 1st), or here for the ECE department (the deadline is the 15th). GREs are not required this year because of covid. If you apply to CS and specify that you want to work with me, I’ll be sure to see your application. If you apply to physics or ECE, I won’t see your application, but once you arrive, I can sometimes supervise or co-supervise PhD students in other departments (or, of course, serve on their committees). In any case, everyone in the UT community is extremely welcome at our quantum information group meetings (which are now on Zoom, naturally, but depending on vaccine distribution, hopefully won’t be by the time you arrive!). Note that emailing me won’t make a difference to your chances of admission. Admissions are very competitive, so apply broadly to maximize your chances.

If you’re a prospective postdoctoral fellow: By January 1, 2021, please email me a cover letter, your CV, and two or three of your best papers (links or attachments). Please also ask two recommenders to email me their letters by January 1. While my own work tends toward computational complexity, I’m open to all parts of theoretical quantum computing and information.

If you’re a prospective faculty member: Yes, faculty searches are still happening despite covid! Go here to apply for an opening in the CS department (which, in quantum computing, currently includes me and MIP*=RE superstar John Wright), or here to apply to the physics department (which, in quantum computing, currently includes Drew Potter, along with a world-class condensed matter group).

The Complexity Zoo needs a new home

November 12th, 2020

Update (Nov. 14): I now have a deluge of serious hosting offers—thank you so much, everyone! No need for more.


Since I’m now feeling more confident that the first authoritarian coup attempt in US history will probably sort itself out OK, here’s a real problem:

Nearly a score of years ago, I created the Complexity Zoo, a procrastination project turned encyclopedia of complexity classes. Nearly half a score of years ago, the Zoo moved to my former employer, the Institute for Quantum Computing in Waterloo, Canada, which has graciously hosted it ever since. Alas, IQC has decided that it can no longer do so. The reason is new regulations in Ontario about the accessibility of websites, which the Zoo might be out of compliance with. My students and I were willing to look into what was needed—like, does the polynomial hierarchy need ramps between its levels or something? The best outcome would be to hear from actual blind or otherwise disabled complexity enthusiasts about how we could improve their experience, rather than trying to parse bureaucratese from the Ontario government. But IQC informed us that, in any case, they can’t deal with the potential liability and their decision is final. I thank them for hosting the Zoo for eight years.

Now I’m looking for a volunteer for a new host. The Zoo runs on the MediaWiki platform, which doesn’t work with my own hosting provider (Bluehost) but is apparently easy to set up if you, unlike me, are the kind of person who can do such things. The IQC folks kindly offered to help with the transfer; my students and I can help as well. It’s a small site with modest traffic. The main things I need are just assurances that you can host the site for a long time (“forever” or thereabouts), and that you or someone else in your organization will be reachable if the site goes down or if there are other problems. I own the complexityzoo.com domain and can redirect from there.

In return, you’ll get the immense prestige of hosting such a storied resource for theoretical computer science … plus free publicity for your cause or organization on Shtetl-Optimized, and the eternal gratitude of thousands of my readers.

Of course, if you’re into complexity theory, and you want to update or improve the Zoo while you’re at it, then so much the awesomer! It could use some updates, badly. But you don’t even need to know P from NP.

If you’re interested, leave a comment or shoot me an email. Thanks!!

Unrelated Announcement: I’ll once again be doing an Ask-Me-Anything session at the Q2B (“Quantum to Business”) conference, December 8-10. Other speakers include Umesh Vazirani, John Preskill, Jennifer Ouellette, Eric Schmidt, and many others. Since the conference will of course be virtual this year, registration is a lot cheaper than in previous years. Check it out! (Full disclosure: Q2B is organized by QC Ware, Inc., for which I’m a scientific advisor.)

On defeating a sociopath

November 9th, 2020

There are people who really, genuinely, believe, as far as you can dig down, that winning is everything—that however many lies they told, allies they betrayed, innocent lives they harmed, etc. etc., it was all justified by the fact that they won and their enemies lost. Faced with such sociopaths, people like me typically feel an irresistible compulsion to counterargue: to make the sociopath realize that winning is not everything, that truth and honor are terminal values as well; to subject the sociopath to the standards by which the rest of us are judged; to find the conscience that the sociopath buried even from himself and drag it out into the light. Let me know if you can think of any case in human history where such efforts succeeded, because I’m having difficulty doing so.

Clearly, in the vast majority of cases if not in all, the only counterargument that a sociopath will ever understand is losing. And yet not just any kind of losing suffices. For victims, there’s an enormous temptation to turn the sociopath’s underhanded tools against him, to win with the same deceit and naked power that the sociopath so gleefully inflicted on others. And yet, if that’s what it takes to beat him, then you have to imagine the sociopath deriving a certain perverse satisfaction from it.

Think of the movie villain who, as the panting hero stands over him with his lightsaber, taunts “Yes … yes … destroy me! Do it now! Feel the hate and the rage flow through you!” What happens next, of course, is that the hero angrily decides to give the villain one more chance, the ungrateful villain lunges to stab the hero in the back or something, and only then does the villain die—either by a self-inflicted accident, or else killed by the hero in immediate self-defense. Either way, the hero walks away with victory and honor.

In practice, it’s a tall order to arrange all of that. This explains why sociopaths are so hard to defeat, and why I feel so bleak and depressed whenever I see one flaunting his power. But, you know, the great upside of pessimism is that it doesn’t take much to beat your expectations! Whenever a single sociopath is cleanly and honorably defeated, or even just rendered irrelevant—no matter that the sociopath’s friends and allies are still in power, no matter that they’ll be back to fight another day, etc. etc.—it’s a genuine occasion for rejoicing.

Anyway, that pretty much sums up my thoughts regarding Arthur Chu. In other news, hooray about the election!