Archive for the ‘Complexity’ Category

Gaussian BosonSampling, higher-order correlations, and spoofing: An update

Sunday, October 10th, 2021

In my last post, I wrote (among other things) about an ongoing scientific debate between the group of Chaoyang Lu at USTC in China, which over the past year has been doing experiments that seek to demonstrate quantum supremacy via Gaussian BosonSampling; and the group of Sergio Boixo at Google, which had a recent paper on a polynomial-time classical algorithm to sample approximately from the same distributions.  I reported the facts as I understood them at the time.  Since then, though, a long call with the Google team gave me a new and different understanding, and I feel duty-bound to share that here.

A week ago, I considered it obvious that if, using a classical spoofer, you could beat the USTC experiment on a metric like total variation distance from the ideal distribution, then you would’ve completely destroyed USTC’s claim of quantum supremacy.  The reason I believed that, in turn, is a proposition that I hadn’t given a name but needs one, so let me call it Hypothesis H:

The only way a classical algorithm to spoof BosonSampling can possibly do well in total variation distance, is by correctly reproducing the high-order correlations (correlations among the occupation numbers of large numbers of modes) — because that’s where the complexity of BosonSampling lies (if it lies anywhere).

Hypothesis H had important downstream consequences.  Google’s algorithm, by the Google team’s own admission, does not reproduce the high-order correlations.  Furthermore, because of limitations on both samples and classical computation time, Google’s paper calculates the total variation distance from the ideal distribution only on the marginal distribution on roughly 14 out of 144 modes.  On that marginal distribution, Google’s algorithm does do better than the experiment in total variation distance.  Google presents a claimed extrapolation to the full 144 modes, but eyeballing the graphs, it was far from clear to me what would happen: like, maybe the spoofing algorithm would continue to win, but maybe the experiment would turn around and win; who knows?

Chaoyang, meanwhile, made a clear prediction that the experiment would turn around and win, because of

  1. the experiment’s success in reproducing the high-order correlations,
  2. the admitted failure of Google’s algorithm in reproducing the high-order correlations, and
  3. the seeming impossibility of doing well on BosonSampling without reproducing the high-order correlations (Hypothesis H).

Given everything my experience told me about the central importance of high-order correlations for BosonSampling, I was inclined to agree with Chaoyang.

Now for the kicker: it seems that Hypothesis H is false.  A classical spoofer could beat a BosonSampling experiment on total variation distance from the ideal distribution, without even bothering to reproduce the high-order correlations correctly.

This is true because of a combination of two facts about the existing noisy BosonSampling experiments.  The first fact is that the contribution from the order-k correlations falls off like 1/exp(k).  The second fact is that, due to calibration errors and the like, the experiments already show significant deviations from the ideal distribution on the order-1 and order-2 correlations.

Put these facts together and what do you find?  Well, suppose your classical spoofing algorithm takes care to get the low-order contributions to the distribution exactly right.  Just for that reason alone, it could already win over a noisy BosonSampling experiment, as judged by benchmarks like total variation distance from the ideal distribution, or for that matter linear cross-entropy.  Yes, the experiment will beat the classical simulation on the higher-order correlations.  But because those higher-order correlations are exponentially attenuated anyway, they won’t be enough to make up the difference.  The experiment’s lack of perfection on the low-order correlations will swamp everything else.
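To make the arithmetic of that swamping effect concrete, here’s a tiny toy model in Python. Every number in it (the exp(-k) weights, the 30% low-order calibration error, the cutoff at order 19) is invented purely for illustration, not taken from Google’s or USTC’s actual data; the point is only that, under assumptions like these, a spoofer that nails orders 1 and 2 and nothing else lands closer to the ideal distribution than an experiment that captures every order imperfectly.

```python
import numpy as np

# Toy model: suppose the ideal distribution's structure decomposes into
# contributions from order-k correlations, with total weight ~ exp(-k).
orders = np.arange(1, 20)
weight = np.exp(-orders)                        # assumed 1/exp(k) falloff

# Hypothetical spoofer: reproduces orders 1-2 exactly, misses everything above.
spoofer_err_per_order = np.where(orders <= 2, 0.0, 1.0)

# Hypothetical noisy experiment: captures every order, but with ~30% relative
# error at the lowest orders (calibration drift), shrinking as k grows.
experiment_err_per_order = 0.3 / orders

# Crude proxy for distance from the ideal distribution: weighted per-order error.
print("spoofer    ~", round(np.sum(weight * spoofer_err_per_order), 3))     # ~0.079
print("experiment ~", round(np.sum(weight * experiment_err_per_order), 3))  # ~0.138
```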

Granted, I still don’t know for sure that this is what happens — that depends on whether I believe Sergio or Chaoyang about the extrapolation of the variation distance to the full 144 modes (my own eyeballs having failed to render a verdict!).  But I now see that it’s logically possible, maybe even plausible.

So, let’s imagine for the sake of argument that Google’s simulation wins on variation distance, even though the experiment wins on the high-order correlations.  In that case, what would be our verdict: would USTC have achieved quantum supremacy via BosonSampling, or not?

It’s clear what each side could say.

Google could say: by a metric that Scott Aaronson, the coinventor of BosonSampling, thought was perfectly adequate as late as last week — namely, total variation distance from the ideal distribution — we won.  We achieved lower variation distance than USTC’s experiment, and we did it using a fast classical algorithm.  End of discussion.  No moving the goalposts after the fact.

Google could even add: BosonSampling is a sampling task; it’s right there in the name!  The only purpose of any benchmark — whether Linear XEB or high-order correlation — is to give evidence about whether you are or aren’t sampling from a distribution close to the ideal one.  But that means that, if you accept that we are doing the latter better than the experiment, then there’s nothing more to argue about.

USTC could respond: even if Scott Aaronson is the coinventor of BosonSampling, he’s extremely far from an infallible oracle.  In the case at hand, his lack of appreciation for the sources of error in realistic experiments caused him to fixate inappropriately on variation distance as the success criterion.  If you want to see the quantum advantage in our system, you have to deliberately subtract off the low-order correlations and look at the high-order correlations.

USTC could add: from the very beginning, the whole point of quantum supremacy experiments was to demonstrate a clear speedup on some benchmark — we never particularly cared which one!  That horse is out of the barn as soon as we’re talking about quantum supremacy at all — something the Google group (which itself reported the first quantum supremacy experiment in Fall 2019, again for a completely artificial benchmark) knows as well as anyone else.  (The Google team even has experience with adjusting benchmarks: when, for example, Pan and Zhang pointed out that Linear XEB as originally specified is pretty easy to spoof for random 2D circuits, the most cogent rejoinder was: OK, fine then, add an extra check that the returned samples are sufficiently different from one another, which kills Pan and Zhang’s spoofing strategy.) In that case, then, why isn’t a benchmark tailored to the high-order correlations as good as variation distance or linear cross-entropy or any other benchmark?

Both positions are reasonable and have merit — though I confess to somewhat greater sympathy for the one that appeals to my doofosity rather than my supposed infallibility!

OK, but suppose, again for the sake of argument, that we accepted the second position, and we said that USTC gets to declare quantum supremacy as long as its experiment does better than any known classical simulation at reproducing the high-order correlations.  We’d still face the question: does the USTC experiment, in fact, do better on that metric?  It would be awkward if, having won the right to change the rules in its favor, USTC still lost even under the new rules.

Sergio tells me that USTC directly reported experimental data only for up to order-7 correlations, and at least individually, the order-7 correlations are easy to reproduce on a laptop (although sampling in a way that reproduces the order-7 correlations might still be hard—a point that Chaoyang confirms, and where further research would be great). OK, but USTC also reported that their experiment seems to reproduce up to order-19 correlations. And order-19 correlations, the Google team agrees, are hard to sample consistently with on a classical computer by any currently known algorithm.

So then, why don’t we have direct data for the order-19 correlations?  The trouble is simply that it would’ve taken USTC an astronomical amount of computation time.  So instead, they relied on a statistical extrapolation from the observed strength of the lower-order correlations — there we go again with the extrapolations!  Of course, if we’re going to let Google rest its case on an extrapolation, then maybe it’s only sporting to let USTC do the same.

You might wonder: why didn’t we have to worry about any of this stuff with the other path to quantum supremacy, the one via random circuit sampling with superconducting qubits?  The reason is that, with random circuit sampling, all the correlations except the highest-order ones are completely trivial — or, to say it another way, the reduced state of any small number of output qubits is exponentially close to the maximally mixed state.  This is a real difference between BosonSampling and random circuit sampling—and even 5-6 years ago, we knew that this represented an advantage for random circuit sampling, although I now have a deeper appreciation for just how great of an advantage it is.  For it means that, with random circuit sampling, it’s easier to place a “sword in the stone”: to say, for example, here is the Linear XEB score achieved by the trivial classical algorithm that outputs random bits, and lo, our experiment achieves a higher score, and lo, we challenge anyone to invent a fast classical spoofing method that achieves a similarly high score.
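To make the “sword in the stone” concrete, here’s a sketch of how a Linear XEB score is computed. The “ideal distribution” below is just a synthetic Porter-Thomas-style stand-in (exponentially distributed weights) rather than an actual circuit simulation, and the qubit count is tiny so that it runs instantly; the sketch just shows why the trivial random-bits algorithm scores about 0 while a faithful sampler scores about 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                       # toy qubit count (real experiments use ~53-60)
N = 2 ** n

# Stand-in for the ideal output distribution of a random circuit: Porter-Thomas-like
# (exponentially distributed weights, normalized). A real benchmark would compute
# these probabilities by classically simulating the circuit.
p_ideal = rng.exponential(size=N)
p_ideal /= p_ideal.sum()

def linear_xeb(samples, p_ideal):
    """Linear cross-entropy benchmark: 2^n times the mean ideal probability of the
    returned samples, minus 1. Uniform guessing scores ~0; ideal sampling ~1."""
    return len(p_ideal) * p_ideal[samples].mean() - 1.0

k = 100_000
uniform_samples = rng.integers(N, size=k)         # the trivial "output random bits" spoofer
ideal_samples = rng.choice(N, size=k, p=p_ideal)  # a perfect sampler, for comparison

print(f"random bits  : {linear_xeb(uniform_samples, p_ideal):+.3f}")   # ~0
print(f"ideal sampler: {linear_xeb(ideal_samples, p_ideal):+.3f}")     # ~1
```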

With BosonSampling, by contrast, we have various metrics with which to judge performance, but so far, for none of those metrics do we have a plausible hypothesis that says “here’s the best that any polynomial-time classical algorithm can possibly hope to do, and it’s completely plausible that even a noisy current or planned BosonSampling experiment can do better than that.”

In the end, then, I come back to the exact same three goals I would’ve recommended a week ago for the future of quantum supremacy experiments, but with all of them now even more acutely important than before:

  1. Experimentally, to increase the fidelity of the devices (with BosonSampling, for example, to observe a larger contribution from the high-order correlations) — a much more urgent goal, from the standpoint of evading classical spoofing algorithms, than further increasing the dimensionality of the Hilbert space.
  2. Theoretically, to design better ways to verify the results of sampling-based quantum supremacy experiments classically — ideally, even ways that could be applied via polynomial-time tests.
  3. For Gaussian BosonSampling in particular, to get a better understanding of the plausible limits of classical spoofing algorithms, and exactly how good a noisy device needs to be before it exceeds those limits.

Thanks so much to Sergio Boixo and Ben Villalonga for the conversation, and to Chaoyang Lu and Jelmer Renema for comments on this post. Needless to say, any remaining errors are my own.

The Physics Nobel, Gaussian BosonSampling, and Dorian Abbot

Tuesday, October 5th, 2021

1. Huge congratulations to the winners of this year’s Nobel Prize in Physics: Syukuro Manabe and Klaus Hasselmann for climate modelling, and separately, Giorgio Parisi for statistical physics. While I don’t know the others, I had the great honor to get to know Parisi three years ago, when he was chair of the committee that awarded me the Tomassoni-Chisesi Prize in Physics, and when I visited Parisi’s department at Sapienza University of Rome to give the prize lecture and collect the award. I remember Parisi’s kindness, a lot of good food, and a lot of discussion of the interplay between theoretical computer science and physics. Note that, while much of Parisi’s work is beyond my competence to comment on, in computer science he’s very well-known for applying statistical physics methods to the analysis of survey propagation—an algorithm that revolutionized the study of random 3SAT when it was introduced two decades ago.


2. Two weeks ago, a group at Google put out a paper with a new efficient classical algorithm to simulate the recent Gaussian BosonSampling experiments from USTC in China. They argued that this algorithm called into question USTC’s claim of BosonSampling-based quantum supremacy. Since then, I’ve been in contact with Sergio Boixo from Google, Chaoyang Lu from USTC, and Jelmer Renema, a Dutch BosonSampling expert and friend of the blog, to try to get to the bottom of this. Very briefly, the situation seems to be that Google’s new algorithm outperforms the USTC experiment on one particular metric: namely, total variation distance from the ideal marginal distribution, if (crucially) you look at only a subset of the optical modes, say 14 modes out of 144 total. Meanwhile, though, if you look at the kth-order correlations for large values of k, then the USTC experiment continues to win. With the experiment, the correlations fall off exponentially with k but still have a meaningful, detectable signal even for (say) k=19, whereas with Google’s spoofing algorithm, you choose the k that you want to spoof (say, 2 or 3), and then the correlations become nonsense for larger k.

Now, given that you were only ever supposed to see a quantum advantage from BosonSampling if you looked at the kth-order correlations for large values of k, and given that we already knew, from the work of Leonid Gurvits, that very small marginals in BosonSampling experiments would be easy to reproduce on a classical computer, my inclination is to say that USTC’s claim of BosonSampling-based quantum supremacy still stands. On the other hand, it’s true that, with BosonSampling especially, more so than with qubit-based random circuit sampling, we currently lack an adequate theoretical understanding of what the target should be. That is, which numerical metric should an experiment aim to maximize, and how well does it have to score on that metric before it’s plausibly outperforming any fast classical algorithm? One thing I feel confident about is that, whichever metric is chosen—Linear Cross-Entropy or whatever else—it needs to capture the kth-order correlations for large values of k. No metric that’s insensitive to those correlations is good enough.


3. Like many others, I was outraged and depressed that MIT uninvited Dorian Abbot (see also here), a geophysicist at the University of Chicago, who was slated to give the Carlson Lecture in the Department of Earth, Atmospheric, and Planetary Sciences about the atmospheres of extrasolar planets. The reason for the cancellation was that, totally unrelatedly to his scheduled lecture, Abbot had argued in Newsweek and elsewhere that Diversity, Equity, and Inclusion initiatives should aim for equality of opportunity rather than equality of outcomes; a Twitter-mob decided to go after him in retaliation; and they succeeded. It should go without saying that it’s perfectly reasonable to disagree with Abbot’s stance, to counterargue—if those very concepts haven’t gone the way of floppy disks. It should also go without saying that the MIT EAPS department chair is free to bow to social-media pressure, as he did, rather than standing on principle … just like I’m free to criticize him for it. To my mind, though, cancelling a scientific talk because of the speaker’s centrist (!) political views completely, 100% validates the right’s narrative about academia, that it’s become a fanatically intolerant echo chamber. To my fellow progressive academics, I beseech thee in the bowels of Bertrand Russell: why would you commit such an unforced error?

Yes, one can imagine views (e.g., open Nazism) so hateful that they might justify the cancellation of unrelated scientific lectures by people who hold those views, much as many physicists after WWII refused to speak to Werner Heisenberg. But it seems obvious to me—as it would’ve been obvious to everyone else not long ago—that no matter where a reasonable person draws the line, Abbot’s views as he expressed them in Newsweek don’t come within a hundred miles of it. To be more explicit still: if Abbot’s views justify deplatforming him as a planetary scientist, then all my quantum computing and theoretical computer science lectures deserve to be cancelled too, for the many attempts I’ve made on this blog over the past 16 years to share my honest thoughts and life experiences, to write like a vulnerable human being rather than like a university press office. While I’m sure some sneerers gleefully embrace that implication, I ask everyone else to consider how deeply they believe in the idea of academic freedom at all—keeping in mind that such a commitment only ever gets tested when there’s a chance someone might denounce you for it.

Update: Princeton’s James Madison Program has volunteered to host Abbot’s Zoom talk in place of MIT. The talk is entitled “Climate and the Potential for Life on Other Planets.” Like probably hundreds of others who heard about this only because of the attempted cancellation, I plan to attend!

Unrelated Bonus Update: Here’s a neat YouTube video put together by the ACM about me as well as David Silver of AlphaGo and AlphaZero, on the occasion of our ACM Prizes in Computing.

My ACM TechTalk on quantum supremadvantage

Wednesday, September 15th, 2021

This Erev Yom Kippur, I wish to repent for not putting enough quantum computing content on this blog. Of course, repentance is meaningless unless accompanied by genuine reform. That being the case, please enjoy the YouTube video of my ACM TechTalk from last week about quantum supremacy—although, as you’ll see if you watch the thing, I oscillate between quantum supremacy and other terms like “quantum advantage” and even “quantum supremadvantage.” This represents the first time ever that I got pushback, for political reasons, about a talk before I’d even delivered it—the social-justice people, it turns out, are actually serious about wanting to ban the term “quantum supremacy”—but my desire to point out all the difficulties with their proposal competed with my desire not to let that issue overshadow my talk.

And there’s plenty to talk about! While regular Shtetl-Optimized readers will have already heard (or read) most of what I say, I make some new comments, including about the new paper from last week, the night before my talk (!), by the USTC group in China, where they report a quantum supremacy experiment based on random circuit sampling with a superconducting chip, this time with a record-setting 60 qubits and 24 layers of gates. On the other hand, I also stress how increasing the circuit fidelity has become a much more urgent issue than further increasing the number of qubits (or in the case of BosonSampling, the number of photons), if our goal is for these experiments to remain a couple steps ahead of classical spoofing algorithms.

Anyway, I hope you enjoy my lovingly handcrafted visuals. Over the course of this pandemic, I’ve become a full convert to writing out my talks with a stylus pen rather than PowerPointing them—not only is it faster for me, not only does it allow for continuous scrolling rather than arbitrary divisions into slides, but it enforces simplicity and concision in ways they should be enforced.

While there was only time for me to field a few questions at the end of the talk, I later supplied written answers to 52 questions (!!) that I hadn’t gotten to. If you have a question, please check to see if it’s already there, and otherwise ask away in the comments!

Thanks so much to Yan Timanovsky for inviting and organizing this talk, and to whurley for hosting it.

Open Problems Related to Quantum Query Complexity

Tuesday, September 14th, 2021

Way back in 2005, I posed Ten Semi-Grand Challenges for Quantum Computing Theory, on at least half of which I’d say there’s been dramatic progress in the 16 years since (most of the challenges were open-ended, so that it’s unclear when to count them as “solved”). I posed more open quantum complexity problems in 2010, and some classical complexity problems in 2011. In the latter cases, I’d say there’s been dramatic progress on about a third of the problems. I won’t go through the problems one by one, but feel free to ask in the comments about any that interest you.

Shall I push my luck as a problem-poser? Shall or shall not, I have.

My impetus, this time around, was a kind invitation by Travis Humble, the editor-in-chief of the new ACM Transactions on Quantum Computing, to contribute a perspective piece to that journal on the occasion of my ACM Prize. I agreed—but only on the condition that, rather than ponderously pontificate about the direction of the field, I could simply discuss a bunch of open problems that I wanted to see solved. The result is below. It’s coming soon to an arXiv near you, but Shtetl-Optimized readers get it first.

Open Problems Related to Quantum Query Complexity (11 pages, PDF)

by Scott Aaronson

Abstract: I offer a case that quantum query complexity still has loads of enticing and fundamental open problems—from relativized QMA versus QCMA and BQP versus IP, to time/space tradeoffs for collision and element distinctness, to polynomial degree versus quantum query complexity for partial functions, to the Unitary Synthesis Problem and more.

Some of the problems on my new hit-list are ones that I and others have flogged for years or even decades, but others, as far as I know, appear here for the first time. If your favorite quantum query complexity open problem, or a problem I’ve discussed in the past, is missing, that doesn’t mean that it’s been solved or is no longer interesting—it might mean I simply ran out of time or energy before I got to it.

Enjoy! And tell me what I missed or got wrong or has a trivial solution that I overlooked.

Yet more mistakes in papers

Tuesday, August 10th, 2021

Amazing Update (Aug. 19): My former PhD student Daniel Grier tells me that he, Sergey Bravyi, and David Gosset have an arXiv preprint, from February, where they give a corrected proof of my and Andris Ambainis’s claim that any k-query quantum algorithm can be simulated by an O(N^{1-1/2k})-query classical randomized algorithm (albeit, not of our stronger statement, about a randomized algorithm to estimate any bounded low-degree real polynomial). The reason I hadn’t known about this is that they don’t mention it in the abstract of their paper (!!). But it’s right there in Theorem 5.


In my last post, I came down pretty hard on the blankfaces: people who relish their power to persist in easily-correctable errors, to the detriment of those subject to their authority. The sad truth, though, is that I don’t obviously do better than your average blankface in my ability to resist falsehoods on early encounter with them. As one of many examples that readers of this blog might know, I didn’t think covid seemed like a big deal in early February 2020—although by mid-to-late February 2020, I’d repented of my doofosity. If I have any tool with which to unblank my face, then it’s only my extreme self-consciousness when confronted with evidence of my own stupidities—the way I’ve trained myself over decades in science to see error-correction as a or even the fundamental virtue.

Which brings me to today’s post. Continuing what’s become a Shtetl-Optimized tradition—see here from 2014, here from 2016, here from 2017—I’m going to fess up to two serious mistakes in research papers on which I was a coauthor.


In 2015, Andris Ambainis and I had a STOC paper entitled Forrelation: A Problem that Optimally Separates Quantum from Classical Computing. We gave two main results there:

  1. An Ω(√N/log N) lower bound on the randomized query complexity of my “Forrelation” problem, which was known to be solvable with only a single quantum query.
  2. A proposed way to take any k-query quantum algorithm that queries an N-bit string, and simulate it using only O(N^{1-1/2k}) classical randomized queries.

Later, Bansal and Sinha and independently Sherstov, Storozhenko, and Wu showed that a k-query generalization of Forrelation, which I’d also defined, requires ~Ω(N^{1-1/2k}) classical randomized queries, in line with my and Andris’s conjecture that k-fold Forrelation optimally separates quantum and classical query complexities.
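For readers who’ve never seen Forrelation concretely: the quantity in question measures how correlated g is with the Boolean Fourier (i.e., Hadamard) transform of f, and the promise problem is to distinguish pairs where that quantity is large from pairs where it’s tiny (I’m glossing over the exact thresholds in the paper). Here’s a brute-force numpy sketch, with my own toy choices of f and g:

```python
import numpy as np

def hadamard(n):
    """The 2^n x 2^n matrix H[x,y] = (-1)^(x.y) (unnormalized)."""
    H = np.ones((1, 1))
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

def forrelation(f, g):
    """Phi_{f,g} = (1/2^(3n/2)) * sum_{x,y} f(x) (-1)^(x.y) g(y),
    for f, g : {0,1}^n -> {+1,-1} given as +/-1 vectors of length N = 2^n."""
    N = len(f)
    n = int(np.log2(N))
    return f @ hadamard(n) @ g / N ** 1.5

rng = np.random.default_rng(0)
n = 8
N = 2 ** n

f = rng.choice([-1.0, 1.0], size=N)
g_random = rng.choice([-1.0, 1.0], size=N)
g_forrelated = np.where(hadamard(n) @ f >= 0, 1.0, -1.0)  # g tracks f's Hadamard transform

print("random pair     :", round(forrelation(f, g_random), 3))      # ~0, of order 1/sqrt(N)
print("forrelated pair :", round(forrelation(f, g_forrelated), 3))  # ~0.8, bounded away from 0
```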

A couple months ago, alas, my former grad school officemate Andrej Bogdanov, along with Tsun Ming Cheung and Krishnamoorthy Dinesh, emailed me and Andris to say that they’d discovered an error in result 2 of our paper (result 1, along with the Bansal-Sinha and Sherstov-Storozhenko-Wu extensions of it, remained fine). So, adding our own names, we’ve now posted a preprint on ECCC that explains the error, while also showing how to recover our result for the special case k=1: that is, any 1-query quantum algorithm really can be simulated using only O(√N) classical randomized queries.

Read the preprint if you really want to know the details of the error, but to summarize it in my words: Andris and I used a trick that we called “variable-splitting” to handle variables that have way more influence than average on the algorithm’s acceptance probability. Alas, variable-splitting fails to take care of a situation where there are a bunch of variables that are non-influential individually, but that on some unusual input string, can “conspire” in such a way that their signs all line up and their contribution overwhelms those from the other variables. A single mistaken inequality fooled us into thinking such cases were handled, but an explicit counterexample makes the issue obvious.

I still conjecture that my original guess was right: that is, I conjecture that any problem solvable with k quantum queries is solvable with O(N^{1-1/2k}) classical randomized queries, so that k-fold Forrelation is the extremal example, and so that no problem has constant quantum query complexity but linear randomized query complexity. More strongly, I reiterate the conjecture that any bounded degree-d real polynomial, p:{0,1}^N→[0,1], can be approximated by querying only O(N^{1-1/d}) input bits drawn from some suitable distribution. But proving these conjectures, if they’re true, will require a new algorithmic idea.


Now for the second mea culpa. Earlier this year, my student Sabee Grewal and I posted a short preprint on the arXiv entitled Efficient Learning of Non-Interacting Fermion Distributions. In it, we claimed to give a classical algorithm for reconstructing any “free fermionic state” |ψ⟩—that is, a state of n identical fermionic particles, like electrons, each occupying one of m>n possible modes, that can be produced using only “fermionic beamsplitters” and no interaction terms—and for doing so in polynomial time and using a polynomial number of samples (i.e., measurements of where all the fermions are, given a copy of |ψ⟩). Alas, after trying to reply to confused comments from readers and reviewers (albeit, none of them exactly putting their finger on the problem), Sabee and I were able to figure out that we’d done no such thing.

Let me explain the error, since it’s actually really interesting. In our underlying problem, we’re trying to find a collection of unit vectors, call them |v1⟩,…,|vm⟩, in C^n. Here, again, n is the number of fermions and m>n is the number of modes. By measuring the “2-mode correlations” (i.e., the probability of finding a fermion in both mode i and mode j), we can figure out the approximate value of |⟨vi|vj⟩|—i.e., the absolute value of the inner product—for any i≠j. From that information, we want to recover |v1⟩,…,|vm⟩ themselves—or rather, their relative configuration in n-dimensional space, isometries being irrelevant.

It seemed to me and Sabee that, if we knew ⟨vi|vj⟩ for all i≠j, then we’d get linear equations that iteratively constrained each |vj⟩ in terms of ⟨vi|vj⟩ for i<j, so all we’d need to do is solve those linear systems, and then (crucially, and this was the main work we did) show that the solution would be robust with respect to small errors in our estimates of each ⟨vi|vj⟩. It seemed further to us that, while it was true that the measurements only revealed |⟨vi|vj⟩| rather than ⟨vi|vj⟩ itself, the “phase information” in ⟨vi|vj⟩ was manifestly irrelevant, as it in any case depended on the irrelevant global phases of |vi⟩ and |vj⟩ themselves.

Alas, it turns out that the phase information does matter. As an example, suppose I told you only the following about three unit vectors |u⟩,|v⟩,|w⟩ in R^3:

|⟨u|v⟩| = |⟨u|w⟩| = |⟨v|w⟩| = 1/2.

Have I thereby determined these vectors up to isometry? Nope! In one class of solution, all three vectors belong to the same plane, like so:

|u⟩=(1,0,0),
|v⟩=(1/2,(√3)/2,0),
|w⟩=(-1/2,(√3)/2,0).

In a completely different class of solution, the three vectors don’t belong to the same plane, and instead look like three edges of a tetrahedron meeting at a vertex:

|u⟩=(1,0,0),
|v⟩=(1/2,(√3)/2,0),
|w⟩=(1/2,1/(2√3),√(2/3)).

These solutions correspond to different sign choices for ⟨u|v⟩, ⟨u|w⟩, and ⟨v|w⟩—choices that collectively matter, even though each of them is individually irrelevant.
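Here’s a quick numpy sanity check of this counterexample (just my own verification script). Both configurations have exactly the same pairwise |⟨vi|vj⟩| = 1/2, yet their Gram determinants differ; and since the Gram determinant is unchanged both by isometries and by flipping the sign (or phase) of any individual vector, no amount of re-phasing can make the two configurations equivalent.

```python
import numpy as np

s3 = np.sqrt(3)
planar = np.array([[1.0,  0.0,    0.0],
                   [0.5,  s3/2,   0.0],
                   [-0.5, s3/2,   0.0]])               # all three vectors in one plane
tetra  = np.array([[1.0,  0.0,        0.0],
                   [0.5,  s3/2,       0.0],
                   [0.5,  1/(2*s3),   np.sqrt(2/3)]])  # three tetrahedron edges at a vertex

for name, V in [("planar", planar), ("tetrahedral", tetra)]:
    G = V @ V.T                                        # Gram matrix of inner products
    off_diag = np.abs(G[np.triu_indices(3, k=1)])
    print(f"{name:12s} |<vi|vj>| = {np.round(off_diag, 3)}   det(Gram) = {np.linalg.det(G):.3f}")
# Same |<vi|vj>| in both cases (all 0.5), but det(Gram) is 0.0 versus 0.5.
```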

It follows that, even in the special case where the vectors are all real, the 2-mode correlations are not enough information to determine the vectors’ relative positions. (Well, it takes some more work to convert this to a counterexample that could actually arise in the fermion problem, but that work can be done.) And alas, the situation gets even gnarlier when, as for us, the vectors can be complex.

Any possible algorithm for our problem will have to solve a system of nonlinear equations (albeit, a massively overconstrained system that’s guaranteed to have a solution), and it will have to use 3-mode correlations (i.e., statistics of triples of fermions), and quite possibly 4-mode correlations and above.

But now comes the good news! Googling revealed that, for reasons having nothing to do with fermions or quantum physics, problems extremely close to ours had already been studied in classical machine learning. The key term here is “Determinantal Point Processes” (DPPs). A DPP is a model where you specify an m×m matrix A (typically symmetric or Hermitian), and then the probabilities of various events are given by the determinants of various principal minors of A. Which is precisely what happens with fermions! In terms of the vectors |v1⟩,…,|vm⟩ that I was talking about before, to make this connection we simply let A be the m×m covariance matrix, whose (i,j) entry equals ⟨vi|vj⟩.
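Here’s a toy numpy sketch of that dictionary, at sizes small enough to enumerate everything. The rank-n projection kernel below is generated at random, purely as a stand-in for a free fermionic state; the point is just that the principal minors of A give the outcome probabilities, and that those probabilities sum to 1.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, m = 2, 4          # n fermions, m modes (toy sizes)

# Random rank-n Hermitian projection: A[i,j] plays the role of <v_i|v_j>.
Q, _ = np.linalg.qr(rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n)))
A = Q @ Q.conj().T

# DPP / fermion rule: the probability that the modes in S are all occupied
# is det(A_S), the principal minor of A indexed by S. For |S| = n, these
# are exactly the probabilities of the possible measurement outcomes.
probs = {S: np.linalg.det(A[np.ix_(S, S)]).real for S in combinations(range(m), n)}

for S, p in probs.items():
    print(f"fermions in modes {S}: probability {p:.4f}")
print("total:", round(sum(probs.values()), 6))   # 1.0, as it must be
```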

I first learned of this remarkable correspondence between fermions and DPPs a decade ago, from a talk on DPPs that Ben Taskar gave at MIT. Immediately after the talk, I made a mental note that Taskar was a rising star in theoretical machine learning, and that his work would probably be relevant to me in the future. While researching this summer, I was devastated to learn that Taskar died of heart failure in 2013, in his mid-30s and only a couple of years after I’d heard him speak.

The most relevant paper for me and Sabee was called An Efficient Algorithm for the Symmetric Principal Minor Assignment Problem, by Rising, Kulesza, and Taskar. Using a combinatorial algorithm based on minimum spanning trees and chordless cycles, this paper nearly solves our problem, except for two minor details:

  1. It doesn’t do an error analysis, and
  2. It considers complex symmetric matrices, whereas our matrix A is Hermitian (i.e., it equals its conjugate transpose, not its transpose).

So I decided to email Alex Kulesza, one of Taskar’s surviving collaborators who’s now a research scientist at Google NYC, to ask his thoughts about the Hermitian case. Alex kindly replied that they’d been meaning to study that case—a reviewer had even asked about it!—but they’d run into difficulties and didn’t know what it was good for. I asked Alex whether he’d like to join forces with me and Sabee in tackling the Hermitian case, which (I told him) was enormously relevant in quantum physics. To my surprise and delight, Alex agreed.

So we’ve been working on the problem together, making progress, and I’m optimistic that we’ll have some nice result. By using the 3-mode correlations, at least “generically” we can recover the entries of the matrix A up to complex conjugation, but further ideas will be needed to resolve the complex conjugation ambiguity, to whatever extent it actually matters.
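To give a flavor of why the 3-mode correlations help: for a Hermitian kernel with unit diagonal, the 3×3 principal minor equals 1 minus the squared magnitudes of the three off-diagonal entries, plus twice the real part of their product taken around the cycle. So measuring it, on top of the 2-mode data, pins down that “cycle phase” up to complex conjugation. A throwaway numpy check on a random Hermitian toy matrix (not an actual fermionic kernel):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random 3x3 Hermitian matrix, rescaled to unit diagonal (a toy stand-in
# for a 3x3 principal submatrix of the kernel A).
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = X @ X.conj().T
d = np.sqrt(np.diag(A).real)
A = A / np.outer(d, d)

det3 = np.linalg.det(A).real       # the "3-mode correlation"

# Hermitian, unit-diagonal expansion of the 3x3 determinant:
#   det = 1 - |A01|^2 - |A02|^2 - |A12|^2 + 2*Re(A01 * A12 * A20),
# so det3 plus the 2-mode data |Aij| reveals Re(A01*A12*A20): the cycle
# phase, determined up to complex conjugation.
cycle_term = (A[0, 1] * A[1, 2] * A[2, 0]).real
reconstructed = (1 - abs(A[0, 1])**2 - abs(A[0, 2])**2 - abs(A[1, 2])**2
                 + 2 * cycle_term)

print(round(det3, 10), round(reconstructed, 10))   # the two agree
```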

In short: on the negative side, there’s much more to the problem of learning a fermionic state than we’d realized. But on the positive side, there’s much more to the problem than we’d realized! As with the simulation of k-query quantum algorithms, my coauthors and I would welcome any ideas. And I apologize to anyone who was misled by our premature (and hereby retracted) claims.


Update (Aug. 11): Here’s a third bonus retraction, which I thank my colleague Mark Wilde for bringing to my attention. Way back in 2005, in my NP-complete Problems and Physical Reality survey article, I “left it as an exercise for the reader” to prove that BQP_CTC, or quantum polynomial time augmented with Deutschian closed timelike curves, is contained in a complexity class called SQG (Short Quantum Games). While it turns out to be true that BQP_CTC ⊆ SQG—as follows from my and Watrous’s 2008 result that BQP_CTC = PSPACE, combined with Gutoski and Wu’s 2010 result that SQG = PSPACE—it’s not something for which I could possibly have had a correct proof back in 2005. I.e., it was a harder exercise than I’d intended!

Slowly emerging from blog-hibervacation

Wednesday, July 21st, 2021

Alright everyone:

  1. Victor Galitski has an impassioned rant against out-of-control quantum computing hype, which I enjoyed and enthusiastically recommend, although I wished Galitski had engaged a bit more with the strongest arguments for optimism (e.g., the recent sampling-based supremacy experiments, the extrapolations that show gate fidelities crossing the fault-tolerance threshold within the next decade). Even if I’ve been saying similar things on this blog for 15 years, I clearly haven’t been doing so in a style that works for everyone. Quantum information needs as many people as possible who will tell the truth as best they see it, unencumbered by any competing interests, and has nothing legitimate to fear from that. The modern intersection of quantum theory and computer science has raised profound scientific questions that will be with us for decades to come. It’s a lily that need not be gilded with hype.
  2. Last month Limaye, Srinivasan, and Tavenas posted an exciting preprint to ECCC, which apparently proves the first (slightly) superpolynomial lower bound on the size of constant-depth arithmetic circuits, over fields of characteristic 0. Assuming it’s correct, this is another small victory in the generations-long war against the P vs. NP problem.
  3. I’m grateful to the Texas Democratic legislators who fled the state to prevent the legislature, a couple miles from my house, from having a quorum to enact new voting restrictions, and who thereby drew national attention to the enormity of what’s at stake. It should go without saying that, if a minority gets to rule indefinitely by forcing through laws to suppress the votes of a majority that would otherwise unseat it, thereby giving itself the power to force through more such laws, etc., then we no longer live in a democracy but in a banana republic. And there’s no symmetry to the situation: no matter how terrified you (or I) might feel about wokeists and their denunciation campaigns, the Democrats have no comparable effort to suppress Republican votes. Alas, I don’t know of any solutions beyond the obvious one, of trying to deal the conspiracy-addled grievance party crushing defeats in 2022 and 2024.
  4. Added: Here’s the video of my recent Astral Codex Ten ask-me-anything session.

Open thread on new quantum supremacy claims

Sunday, July 4th, 2021

Happy 4th to those in the US!

The group of Chaoyang Lu and Jianwei Pan, based at USTC in China, has been on a serious quantum supremacy tear lately. Recall that last December, USTC announced the achievement of quantum supremacy via Gaussian BosonSampling, with 50-70 detected photons—the second claim of sampling-based quantum supremacy, after Google’s in Fall 2019. However, skeptics then poked holes in the USTC claim, showing how they could spoof the results with a classical computer, basically by reproducing the k-photon correlations for relatively small values of k. Debate over the details continues, but the Chinese group seeks to render the debate largely moot with a new and better Gaussian BosonSampling experiment, with 144 modes and up to 113 detected photons. They say they were able to measure k-photon correlations for k up to about 19, which if true would constitute a serious obstacle to the classical simulation strategies that people discussed for the previous experiment.

In the meantime, though, an overlapping group of authors had put out another paper the day before (!) reporting a sampling-based quantum supremacy experiment using superconducting qubits—extremely similar to what Google did (the same circuit depth and everything), except now with 56 qubits rather than 53.

I confess that I haven’t yet studied either paper in detail—among other reasons, because I’m on vacation with my family at the beach, and because I’m trying to spend what work-time I have on my own projects. But anyone who has read them, please use the comments of this post to discuss! Hopefully I’ll learn something.

To confine myself to some general comments: since Google’s announcement in Fall 2019, I’ve consistently said that sampling-based quantum supremacy is not yet a done deal. I’ve said that quantum supremacy seems important enough to want independent replications, and demonstrations in other hardware platforms like ion traps and photonics, and better gate fidelity, and better classical hardness, and better verification protocols. Most of all, I’ve said that we needed a genuine dialogue between the “quantum supremacists” and the classical skeptics: the former doing experiments and releasing all their data, the latter trying to design efficient classical simulations for those experiments, and so on in an iterative process. Just like in applied cryptography, we’d only have real confidence in a quantum supremacy claim once it had survived at least a few years of attacks by skeptics. So I’m delighted that this is precisely what’s now happening. USTC’s papers are two new volleys in this back-and-forth; we all eagerly await the next volley, whichever side it comes from.

While I’ve been trying for years to move away from the expectation that I blog about each and every QC announcement that someone messages me about, maybe I’ll also say a word about the recent announcement by IBM of a quantum advantage in space complexity (see here for popular article and here for arXiv preprint). There appears to be a nice theoretical result here, about the ability to evaluate any symmetric Boolean function with a single qubit in a branching-program-like model. I’d love to understand that result better. But to answer the question I received, this is another case where, once you know the protocol, you know both that the experiment can be done and exactly what its result will be (namely, the thing predicted by QM). So I think the interest is almost entirely in the protocol itself.

STOC’2021 and BosonSampling

Wednesday, June 23rd, 2021

Happy birthday to Alan Turing!

This week I’m participating virtually in STOC’2021, which today had a celebration of the 50th anniversary of NP-completeness (featuring Steve Cook, Richard Karp, Leonid Levin, Christos Papadimitriou, and Avi Wigderson), and which tomorrow will have a day’s worth of quantum computing content, including a tutorial on MIP*=RE, two quantum sessions, and an invited talk on quantum supremacy by John Martinis. I confess that I’m not a fan of GatherTown, the platform being used for STOC. Basically, you get a little avatar who wanders around a virtual hotel lobby and enters sessions—but it seems to reproduce all of the frustrating and annoying parts of the in-person conference experience without any of the good parts.

Ah! But I got the surprising news that Alex Arkhipov and I are among the winners of STOC’s first-ever “Test of Time Award,” for our paper on BosonSampling. It feels strange to win a “Test of Time” award for work that we did in 2011, which still seems like yesterday to me. All the more since the experimental status and prospects of quantum supremacy via BosonSampling are still very much live, unresolved questions.

Speaking of which: on Monday, Alexey Rubtsov, of the Skolkovo Institute in Moscow, gave a talk for our quantum information group meeting at UT, about his recent work with Popova on classically simulating Gaussian BosonSampling. From the talk, I learned something extremely important. I had imagined that their simulation must take advantage of the high rate of photon loss in actual experiments (like the USTC experiment from late 2020), because how else are you going to simulate BosonSampling efficiently? But Rubtsov explained that that’s not how it works at all. While their algorithm is heuristic and remains to be rigorously analyzed, numerical studies suggest that it works even with no photon losses or other errors. Having said that, their algorithm works:

  • only for Gaussian BosonSampling, not Fock-state BosonSampling (as Arkhipov and I had originally proposed),
  • only for threshold detectors, not photon-counting detectors, and
  • only for a small number of modes (say, linear in the number of photons), not for a large number of modes (say, quadratic in the number of photons) as in the original proposal.

So, bottom line, it now looks like the USTC experiment, amazing engineering achievement though it was, is not hard to spoof with a classical computer. If so, this is because of multiple ways in which the experiment differed from my and Arkhipov’s original theoretical proposal. We know exactly what those ways are—indeed, you can find them in my earlier blog posts on the subject—and hopefully they can be addressed in future experiments. All in all, then, we’re left with a powerful demonstration of the continuing relevance of formal hardness reductions, and the danger of replacing them with intuitions and “well, it still seems hard to me.” So I hope the committee won’t rescind my and Arkhipov’s Test of Time Award based on these developments in the past couple weeks!

More quantum computing popularization!

Tuesday, June 8th, 2021

I now have a feature article up at Quanta magazine, entitled “What Makes Quantum Computing So Hard To Explain?” I.e., why do journalists, investors, etc. so consistently get central points wrong, even after the subject has been in public consciousness for more than 25 years? Perhaps unsurprisingly, I found it hard to discuss that meta-level question, as Quanta‘s editors asked me to do, without also engaging in the object-level task of actually explaining QC. For regular Shtetl-Optimized readers, there will be nothing new here, but I’m happy with how the piece turned out.

Accompanying the Quanta piece is a 10-minute YouTube explainer on quantum computing, which (besides snazzy graphics) features interviews with me, John Preskill, and Dorit Aharonov.

On a different note, my colleague Mark Wilde has recorded a punk-rock song about BosonSampling. I can honestly report that it’s some of the finest boson-themed music I’ve heard in years. It includes the following lyrics:

Quantum computer, Ain’t no loser
Quantum computer, Quantum computer

People out on the streets
They don’t know what it is
They think it finds the cliques
Or finds graph colorings
But it don’t solve anything
Said it don’t solve anything
Bosonic slot machine
My lil’ photonic dream

Speaking of BosonSampling, A. S. Popova and A. N. Rubtsov, of the Skolkovo Institute in Moscow, have a new preprint entitled Cracking the Quantum Advantage threshold for Gaussian Boson Sampling. In it, they claim to give an efficient classical algorithm to simulate noisy GBS experiments, like the one six months ago from USTC in China. I’m still unsure how well this scales from 30-40 photons up to 50-70 photons; which imperfections of the USTC experiment are primarily being taken advantage of (photon losses?); and how this relates to the earlier proposed classical algorithms for simulating noisy BosonSampling, like the one by Kalai and Kindler. Anyone with any insight is welcome to share!

OK, one last announcement: the Simons Institute for the Theory of Computing, in Berkeley, has a new online lecture series called “Breakthroughs,” which many readers of this blog might want to check out.

Doubts about teapot supremacy: my reply to Richard Borcherds

Tuesday, April 20th, 2021

Richard Borcherds is a British mathematician at Berkeley, who won the 1998 Fields Medal for the proof of the monstrous moonshine conjecture among many other contributions. A couple months ago, Borcherds posted on YouTube a self-described “rant” about quantum computing, which was recently making the rounds on Facebook and which I found highly entertaining.

Borcherds points out that the term “quantum supremacy” means only that quantum computers can outperform existing classical computers on some benchmark, which can be chosen to show maximum advantage for the quantum computer. He allows that BosonSampling could have some value, for example in calibrating quantum computers or in comparing one quantum computer to another, but he decries the popular conflation of quantum supremacy with the actual construction of a scalable quantum computer able (for example) to run Shor’s algorithm to break RSA.

Borcherds also proposes a “teapot test,” according to which any claim about quantum computers can be dismissed if an analogous claim would hold for a teapot (which he brandishes for the camera). For example, there are many claims to solve practical optimization and machine learning problems by “quantum/classical hybrid algorithms,” wherein a classical computer does most of the work but a quantum computer is somehow involved. Borcherds points out that, at least as things stand in early 2021, in most or all such cases, the classical computer could’ve probably done as well entirely on its own. So then if you put a teapot on top of your classical computer while it ran, you could equally say you used a “classical/teapot hybrid approach.”

Needless to say, Borcherds is correct about all of this. I’ve made similar points on this blog for 15 years, although less Britishly. I’m delighted to have such serious new firepower on the scoffing-at-QC-hype team.

I do, however, have one substantive disagreement. At one point, Borcherds argues that sampling-based quantum supremacy itself fails his teapot test. For consider the computational problem of predicting how many pieces a teapot will break into if it’s dropped on the ground. Clearly, he says, the teapot itself will outperform any simulation running on any existing classical computer at that task, and will therefore achieve “teapot supremacy.” But who cares??

I’m glad that Borcherds has set out, rather crisply, an objection that’s been put to me many times over the past decade. The response is simple: I don’t believe the teapot really does achieve teapot supremacy on the stated task! At the least, I’d need to be shown why. You can’t just assert it without serious argument.

If we want to mirror the existing quantum supremacy experiments, then the teapot computational problem, properly formulated, should be: given as input a description of a teapot’s construction, the height from which it’s dropped, etc., output a sample from the probability distribution over the number of shards that the teapot will break into when it hits the floor.

If so, though, then clearly a classical computer can easily sample from the same distribution! Why? Because presumably we agree that there’s a negligible probability of more than (say) 1000 shards. So the distribution is characterized by a list of at most 1000 probabilities, which can be estimated empirically (at the cost of a small warehouse of smashed teapots) and thereafter used to generate samples. In the plausible event that the distribution is (say) a Gaussian, it’s even easier: just estimate the mean and variance.
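In code, the “small warehouse of smashed teapots” strategy is as unglamorous as it sounds. Here’s a sketch; the shard counts are invented (a Poisson-ish distribution I made up), since, as recounted below, my attempt to collect real data did not go well.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend we smashed 200 identically manufactured teapots and recorded how
# many shards each broke into. (Invented data, purely for illustration.)
observed_shards = rng.poisson(lam=6, size=200) + 1

# The classical "teapot spoofer": estimate the distribution empirically once,
# then sample from the estimate forever after, no further teapots required.
values, counts = np.unique(observed_shards, return_counts=True)
estimated_probs = counts / counts.sum()

fresh_samples = rng.choice(values, size=10, p=estimated_probs)
print("estimated support :", values)
print("estimated probs   :", np.round(estimated_probs, 3))
print("ten new 'smashes' :", fresh_samples)
```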

A couple days ago, I was curious what the distribution looked like, so I decided to order some teapots from Amazon and check. Unfortunately, real porcelain teapots are expensive, and it seemed vaguely horrific to order dozens (as would be needed to get reasonable data) for the sole purpose of smashing them on my driveway. So I hit on what seemed like a perfect solution: I ordered toy teapots, which were much smaller and cheaper. Alas, when my toy “porcelain” teapots arrived yesterday, they turned out (unsurprisingly in retrospect for a children’s toy) to be some sort of plastic or composite material, meaning that they didn’t break unless one propelled them downward forcefully. So, while I can report that they tended to break into one or two large pieces along with two or three smaller shards, I found it impossible to get better data. (There’s a reason why I became a theoretical computer scientist…)

The good news is that my 4-year-old son had an absolute blast smashing toy teapots with me on our driveway, while my 8-year-old daughter was thrilled to take the remaining, unbroken teapots for her dollhouse. I apologize if this fails to defy gender stereotypes.

Anyway, it might be retorted that it’s not good enough to sample from a probability distribution: what’s wanted, rather, is to calculate how many pieces this specific teapot will break into, given all the microscopic details of it and its environment. Aha, this brings us to a crucial conceptual point: in order for something to count as an “input” to a computer, you need to be able to set it freely. Certainly, at the least, you need to be able to measure and record the input in its entirety, so that someone trying to reproduce your computation on a standard silicon computer would know exactly which computation to do. You don’t get to claim computational supremacy based on a problem with secret inputs: that’s like failing someone on a math test without having fully told them the problems.

Ability to set and know the inputs is the key property that’s satisfied by Google’s quantum supremacy experiment, and to a lesser extent by the USTC BosonSampling experiment, but that’s not satisfied at all by the “smash a teapot on the floor” experiment. Or perhaps it’s better to say: influences on a computation that vary uncontrollably and chaotically, like gusts of air hitting the teapot as it falls to the floor, shouldn’t be called “inputs” at all; they’re simply noise sources. And what one does with noise sources is to try to estimate their distribution and average over them—but in that case, as I said, there’s no teapot supremacy.

A Facebook friend said to me: that’s well and good, but surely we could change Borcherds’s teapot experiment to address this worry? For example: add a computer-controlled lathe (or even a 3D printer), with which you can build a teapot in an arbitrary shape of your choice. Then consider the problem of sampling from the probability distribution over how many pieces that teapot will smash into, when it’s dropped from some standard height onto some standard surface. I replied that this is indeed more interesting—in fact, it already seems more like what engineers do in practice (still, sometimes!) when building wind tunnels, than like a silly reductio ad absurdum of quantum supremacy experiments. On the other hand, if you believe the Extended Church-Turing Thesis, then as long as your analog computer is governed by classical physics, it’s presumably inherently limited to an Avogadro’s number type speedup over a standard digital computer, whereas with a quantum computer, you’re limited only by the exponential dimensionality of Hilbert space, which seems more interesting.

Or maybe I’m wrong—in which case, I look forward to the first practical demonstration of teapot supremacy! Just like with quantum supremacy, though, it’s not enough to assert it; you need to … put the tea where your mouth is.

Update: On the suggestion of Ernest Davis, who I can now reveal as the Facebook friend mentioned above, I just ordered some terra cotta flower pots, which look cheap, easily smashable, and environmentally friendly, and which will hopefully be acceptable substitutes for porcelain teapots in a new experiment. (Not that my main arguments in this post hinge on the results of such an experiment! That’s the power of theory.)

Another Update: Some of you might enjoy John Horgan’s Scientific American column on reality vs. hype in quantum computing, based on conversations with me and with Terry Rudolph of PsiQuantum.