Archive for December, 2020

My vaccine crackpottery: a confession

Thursday, December 31st, 2020

I hope everyone is enjoying a New Year’s as festive as the circumstances allow!

I’ve heard from a bunch of you awaiting my next post on the continuum hypothesis, and it’s a-comin’, but I confess the new, faster-spreading covid variant is giving me the same sinking feeling that Covid 1.0 gave me in late February, making it really hard to think about the eternal. (For perspectives on Covid 2.0 from individuals who acquitted themselves well with their early warnings about Covid 1.0, see for example this by Jacob Falkovich, or this by Zvi Mowshowitz.)

So on that note: do you hold any opinions, on factual matters of practical importance, that most everyone around you sharply disagrees with? Opinions that those who you respect consider ignorant, naïve, imprudent, and well outside your sphere of expertise? Opinions that, nevertheless, you simply continue to hold, because you’ve learned that, unless and until someone shows you the light, you can no more will yourself to change what you think about the matter than change your blood type?

I try to have as few such opinions as possible. Having run Shtetl-Optimized for fifteen years, I’m acutely aware of the success rate of those autodidacts who think they’ve solved P versus NP or quantum gravity or whatever. It’s basically zero out of hundreds—and why wouldn’t it be?

And yet there’s one issue where I feel myself in the unhappy epistemic situation of those amateurs, spamming the professors in all-caps. So, OK, here it is:

I think that, in a well-run civilization, the first covid vaccines would’ve been tested and approved by around March or April 2020, while mass-manufacturing simultaneously ramped up with trillions of dollars’ investment. I think almost everyone on earth could have, and should have, already been vaccinated by now. I think a faster, “WWII-style” approach would’ve saved millions of lives, prevented economic destruction, and carried negligible risks compared to its benefits. I think this will be clear to future generations, who’ll write PhD theses exploring how it was possible that we invented multiple effective covid vaccines in mere days or weeks, but then simply sat on those vaccines for a year, ticking off boxes called “Phase I,” “Phase II,” etc. while civilization hung in the balance.

I’ve said similar things, on this blog and elsewhere, since the beginning of the pandemic, but part of me kept expecting events to teach me why I was wrong. Instead events—including the staggering cost of delay, the spectacular failures of institutional authorities to adapt to the scientific realities of covid, and the long-awaited finding that all the major vaccines safely work (some better than others), just like the experts predicted back in February—all this only made me more confident of my original, stupid and naïve position.

I’m saying all this—clearly enough that no one will misunderstand—but I’m also scared to say it. I’m scared because it sounds too much like colossal ingratitude, like Monday-morning quarterbacking of one of the great heroic achievements of our era by someone who played no part in it.

Let’s be clear: the ~11 months that it took to get from sequencing the novel coronavirus, to approving and mass-manufacturing vaccines, is a world record, soundly beating the previous record of 4 years. Nobel Prizes and billions of dollars are the least that those who made it happen deserve. Eternal praise is especially due to those like Katalin Karikó, who risked their careers in the decades before covid to do the basic research on mRNA delivery that made the development of these mRNA vaccines so blindingly fast.

Furthermore, I could easily believe that there’s no one agent—neither Pfizer nor BioNTech nor Moderna, neither the CDC nor FDA nor other health or regulatory agencies, neither Bill Gates nor Moncef Slaoui—who could’ve unilaterally sped things up very much. If one of them tried, they would’ve simply been ostracized by the other parts of the system, and they probably all understood that. It might have taken a whole different civilization, with different attitudes about utility and risk.

And yet the fact remains that, historic though it was, a one-to-two-year turnaround time wasn’t nearly good enough. Especially once we factor in the faster-spreading variant, by the time we’ve vaccinated everyone, we’ll already be a large fraction of the way to herd immunity and to the vaccine losing its purpose. For all the advances in civilization, from believing in demonic spirits all the way to understanding mRNA at a machine-code level of detail, covid is running wild much like it would have back in the Middle Ages—partly, yes, because modern transportation helps it spread, but partly also because our political and regulatory and public-health tools have lagged so breathtakingly behind our knowledge of molecular biology.

What could’ve been done faster? For starters, as I said back in March, we could’ve had human challenge trials with willing volunteers, of whom there were tens of thousands. We could’ve started mass-manufacturing months earlier, with funding commensurate with the problem’s scale (think trillions, not billions). Today, we could give as many people as possible the first doses (which apparently already provide something like ~80% protection) before circling back to give the second doses (which boost the protection as high as ~95%). We could distribute the vaccines that are now sitting in warehouses, spoiling, while people in the distribution chain take off for the holidays—but that’s such low-hanging fruit that it feels unsporting even to mention it.

Let me now respond to three counterarguments that would surely come up in the comments if I didn’t address them.

  1. The Argument from Actual Risk. Every time this subject arises, someone patiently explains to me that, since a vaccine gets administered to billions of healthy people, the standards for its safety and efficacy need to be even higher than they are for ordinary medicines. Of course that’s true, and it strikes me as an excellent reason not to inject people with a completely untested vaccine! All I ask is that the people who are, or could be, harmed by a faulty vaccine, be weighed on the same moral scale as the people harmed by covid itself. As an example, we know that the Phase III clinical trials were repeatedly halted for days or weeks because of a single participant developing strange symptoms—often a participant who’d received the placebo rather than the actual vaccine! That person matters. Any future vaccine recipient who might develop similar symptoms matters. But the 10,000 people who die of covid every single day we delay, along with the hundreds of millions more impoverished, kept out of school, etc., matter equally. If we threw them all onto the same utilitarian scale, would we be making the same tradeoffs that we are now? I feel like the question answers itself.
  2. The Argument from Perceived Risk. Even with all the testing that’s been done, somewhere between 16% and 40% of Americans (depending on which poll you believe) say that they’ll refuse to get a covid vaccine, often because of anti-vaxx conspiracy theories. How much higher would the percentage be had the vaccines been rushed out in a month or two? And of course, if not enough people get vaccinated, then R0 remains above 1 and the public-health campaign is a failure. In this way of thinking, we need three phases of clinical trials the same way we need everyone to take off their shoes at airport security: it might not prevent a single terrorist, but the masses will be too scared to get on the planes if we don’t. To me, this (if true) only underscores my broader point, that the year-long delay in getting vaccines out represents a failure of our entire civilization, rather than a failure of any one agent. But also: people’s membership in the pro- or anti-vaxx camps is not static. The percentage saying they’ll get a covid vaccine seems to have already gone up, as a formerly abstract question becomes a stark choice between wallowing in delusions and getting a deadly disease, or accepting reality and not getting it. So while the Phase III trials were still underway—when the vaccines were already known to be safe, and experts thought it much more likely than not that they’d work—would it have been such a disaster to let Pfizer and Moderna sell the vaccines, for a hefty profit, to those who wanted them? With the hope that, just like with the iPhone or any other successful consumer product, satisfied early adopters would inspire the more reticent to get in line too?
  3. The Argument from Trump. Now for the most awkward counterargument, which I’d like to address head-on rather than dodge. If the vaccines had been approved faster in the US, it would’ve looked to many like Trump deserved credit for it, and he might well have been reelected. And devastating though covid has been, Trump is plausibly worse! Here’s my response: Trump has the mentality of a toddler, albeit with curiosity swapped out for cruelty and vindictiveness. His and his cronies’ impulsivity, self-centeredness, and incompetence are likely responsible for at least ~200,000 of the 330,000 Americans now dead from covid. But, yes, reversing his previous anti-vaxx stance, Trump did say that he wanted to see a covid vaccine in months, just like I’ve said. Does it make me uncomfortable to have America’s worst president in my “camp”? Only a little, because I have no problem admitting that sometimes toddlers are right and experts are wrong. The solution, I’d say, is not to put toddlers in charge of the government! As should be obvious by now—indeed, as should’ve been obvious back in 2016—that solution has some exceedingly severe downsides. The solution, rather, is to work for a world where experts are unafraid to speak bluntly, so that it never falls to a mental toddler to say what the experts can’t say without jeopardizing their careers.

Anyway, despite everything I’ve written, considerations of Aumann’s Agreement Theorem still lead me to believe there’s an excellent chance that I’m wrong, and the vaccines couldn’t realistically have been rolled out any faster. The trouble is, I don’t understand why. And I don’t understand why compressing this process, from a year or two to at most a month or two, shouldn’t be civilization’s most urgent priority ahead of the next pandemic. So go ahead, explain it to me! I’ll be eternally grateful to whoever makes me retract this post in shame.

Update (Jan. 1, 2021): If you want a sense of the on-the-ground realities of administering the vaccine in the US, check out this long post by Zvi Mowshowitz. Briefly, it looks like in my post, I gave those in charge way too much benefit of the doubt (!!). The Trump administration pledged to administer 20 million vaccines by the end of 2020; instead it administered fewer than 3 million. Crucially, this is not because of any problem with manufacturing or supply, but just because of pure bureaucratic blank-facedness. Incredibly, even as the pandemic rages, most of the vaccines are sitting in storage, at severe risk of spoiling … and officials’ primary concern is not to administer the precious doses, but just to make sure no one gets a dose “out of turn.” In contrast to Israel, where they’re now administering vaccines 24/7, including on Shabbat, with the goal being to get through the entire population as quickly as possible, in the US they’re moving at a snail’s pace and took off for the holidays. In Wisconsin, a pharmacist intentionally spoiled hundreds of doses; in West Virginia, they mistakenly gave antibody treatments instead of vaccines. There are no longer any terms to understand what’s happening other than those of black comedy.

The case for moving to a red state

Tuesday, December 22nd, 2020

Update (Dec. 23): This post quickly attracted many of the most … colorful comments in this blog’s 15-year history. My moderation queue is overflowing right now with “gas the kikes,” “[f-word] [n-words],” “race war now,” “kikes deserve to burn in hell,” “a world without [n-words],” “the day of the rope approaches,” and countless similar contributions. One commenter focused on how hilarious he found my romantic difficulties earlier in life.

The puzzle, for me, is that I’d spent years denouncing Trump’s gleeful destruction of the country that I grew up believing in, using the strongest language I could muster. So why am I only now getting all the hate-spam?

Then a possible explanation hit me: namely, the sort of person who’d leave such comments is utterly impervious to moral condemnation. The only thing such a person cares about—indeed, as it turns out, feels a volcanic need to shout down—is someone articulating an actual plausible path to removing his resentment-fueled minority from power. If this is right, then I’m proud to have hit a nerve. –SA


  1. The US is now a failed democracy, with a president who’s considering declaring martial law to avoid conceding a lost election, and with the majority of his party eager to follow him arbitrarily far into the abyss. Even assuming, as I do, that the immediate putsch will fail, the Republic will not magically return to normal.
  2. The survival of Enlightenment values on Earth now depends, in large part, on the total electoral humiliation and defeat of the forces that enabled Trump—something that the last election failed to deliver.
  3. Alas, ever since it absorbed the Southern racists in the 1960s, the Republican Party has maintained a grip on power wholly out of proportion to its numbers through anti-democratic means. The most durable of these means are built into the Constitution itself: the Electoral College, the overrepresentation of sparsely-populated rural states in the Senate, and the gerrymandering of Congressional districts. Every effort to fix these anachronisms, whether by legislation or Constitutional amendment, has been blocked for generations. It’s fantasy to imagine the beneficiaries of these unjust advantages ever voluntarily giving them up.
  4. Accordingly, the survival of the nation might come down to whether enough Americans, in deep-blue areas like California and New York and Massachusetts, are willing to pick up and move to where their votes actually count.
  5. The pandemic has awoken tens of millions of people to the actual practical feasibility of working from home or in a different time zone from their employer. The culture has finally caught up to the abridgment of distance that the Internet, smartphones, and videoconferencing achieved well over a decade ago.
  6. Still, one doesn’t expect Brooklynites to settle by the thousands on remote mountaintops. And even if they did, there are many remote mountaintops, so the transplants’ power could be diluted to near nothing. Better for the transplants to concentrate themselves in a few Schelling points: ideally, cities where they could both swing the national electoral calculus and actually want to live.
  7. There’s been a spate of recent articles about the possible exodus of tech companies and professionals from the Bay Area, because of whatever combination of sky-high rents, NIMBYism, taxes, mismanagement, wildfires, blackouts, and the pandemic having removed the once-overwhelming reasons to be in the Bay. Oft-mentioned alternatives include Miami, Denver, and of course my own adopted hometown of Austin, TX, where Elon Musk and Oracle just announced they’re moving.
  8. If you were trying to optimize your environment for urban Blue-Tribeyness—indie music, craft beer, ironic tattoos, Bernie Sanders yard signs, etc. etc.—but subject to living in an important red or purple state, where your vote could plausibly contribute to a historic political realignment of the US—then you couldn’t do much better than Austin. Where else is in the running? Atlanta, Houston, San Antonio, Pittsburgh?
  9. It’s true that Texas is the state of Ken Paxton, the corrupt and unhinged Attorney General who unsuccessfully petitioned the US Supreme Court to overturn Trump’s election loss. But it’s also the state of MD Anderson, often considered the best oncology center on earth, and of Steven Weinberg, possibly the greatest living physicist. It’s where the spike proteins of both the Pfizer and Moderna covid vaccines were developed. It’s where Sheldon Cooper grew up—alright, he’s fictional, but I’ve worked with undergrads at UT Austin who almost could’ve been Sheldon. Like the US as a whole, the state has potential.
  10. Accelerating the mass migration of blue Americans to cities like Austin isn’t only good for the country and the world. The New Yorkers and San Franciscans left behind will thank the migrants for lower rents!
  11. But won’t climate change make Texas a living hell? Alas, as recent wildfires and hurricanes remind us, there aren’t many places on earth that climate change won’t soon make various shades of hell. At least Austin, like many red locales, is far inland. For the summers, there are lots of swimming pools and lakes.
  12. If Austin gets overrun by Silicon Valley refugees, won’t they recreate whatever dysfunctional conditions caused them to flee Silicon Valley in the first place? Maybe, eventually, but it would take quite a while. One problem at a time! And the “problems of Silicon Valley” are problems most places should desperately want.
  13. Is Texas winnable—or is a blue Texas like controlled nuclear fusion, forever a decade or two in the future? Well, Trump’s 6-point margin in Texas this November, 3 points less than his margin in 2016, amounted to 630,000 votes out of 11.3 million cast. Meanwhile, net migration to Texas over the past decade included 356,000 to Austin (growing its population by 20%), 687,000 to Dallas, 603,000 to Houston, 260,000 to San Antonio. Let’s say we want two million more transplants. The question is not whether they’re going to arrive but at what rate.
  14. Can the cities of Texas accommodate two million more people? Well, traffic will get worse, rents will get higher … but the answer is an unequivocal yes. Land, Texas has.
  15. Do the tech workers who I’d like to relocate even vote blue? Given the unremitting scorn that the woke press now heaps on “racist, sexist, greedy Silicon Valley techbros,” it can be easy to forget this, but the answer to the question is: yes, overwhelmingly, they do. Mountain View, CA, for example, went 83% Biden and only 15% Trump in November.
  16. Even if everything I’ve said is obvious, in order for the Great Red-State Tech-Worker Migration to happen at the rate I want, it needs to become common knowledge that it’s happening—not merely known but known to be known, and so forth. Closely related, it needs to become a serious status symbol for any blue-triber to relocate to a contested state. (“You’re moving to Georgia to help save the Republic? And you’ll be able to afford a four-bedroom house? I’m so jealous!”)
  17. This has been the real purpose of this post: to make it clear that, if you help settle the wild frontier like my family did, then a tiny bit of the unattainable coolness of a stuttering quantum complexity theory blogger/professor could rub off on you.
  18. Think about it this way. Many of our grandparents gave their lives to save the world from fascism. Would you have done the same in their place? OK now, what if you didn’t have to lose your life: you only had to live in Austin or Miami?
  19. If this post plays a role in any like-minded reader’s decision to move to Austin, then once covid is over, they should tell me, and redeem this post for a personal welcome celebration from me and Dana. We’ll throw some extra brisket on the barbie.

Chinese BosonSampling experiment: the gloves are off

Wednesday, December 16th, 2020

Two weeks ago, I blogged about the striking claim, by the group headed by Chaoyang Lu and Jianwei Pan at USTC in China, to have achieved quantum supremacy via BosonSampling with 50-70 detected photons. I also did a four-part interview on the subject with Jonathan Tennenbaum at Asia Times, and other interviews elsewhere. None of that stopped some people, who I guess didn’t google, from writing to tell me how disappointed they were by my silence!

The reality, though, is that a lot has happened since the original announcement, so it’s way past time for an update.

I. The Quest to Spoof

Most importantly, other groups almost immediately went to work trying to refute the quantum supremacy claim, by finding some efficient classical algorithm to spoof the reported results. It’s important to understand that this is exactly how the process is supposed to work: as I’ve often stressed, a quantum supremacy claim is credible only if it’s open to the community to refute and if no one can. It’s also important to understand that, for reasons we’ll go into, there’s a decent chance that people will succeed in simulating the new experiment classically, although they haven’t yet. All parties to the discussion agree that the new experiment is, far and away, the closest any BosonSampling experiment has ever gotten to the quantum supremacy regime; the hard part is to figure out if it’s already there.

Part of me feels guilty that, as one of the reviewers on the Science paper—albeit one stressed and harried by kids and covid—I clearly didn’t exercise the amount of diligence that I could have in searching for ways to kill the new supremacy claim. But another part of me feels that, with quantum supremacy claims, much like with proposals for new cryptographic codes, vetting can’t be the responsibility of one or two reviewers. Instead, provided the claim is serious—as this one obviously is—the only thing to do is to get the paper out, so that the entire community can then work to knock it down. Communication between authors and skeptics is also a hell of a lot faster when it doesn’t need to go through a journal’s editorial system.

Not surprisingly, one skeptic of the new quantum supremacy claim is Gil Kalai, who (despite Google’s result last year, which Gil still believes must be in error) rejects the entire possibility of quantum supremacy on quasi-metaphysical grounds. But other skeptics are current and former members of the Google team, including Sergio Boixo and John Martinis! And—pause to enjoy the irony—Gil has effectively teamed up with the Google folks on questioning the new claim. Another central figure in the vetting effort—one from whom I’ve learned much of what I know about the relevant issues over the last week—is Dutch quantum optics professor and frequent Shtetl-Optimized commenter Jelmer Renema.

Without further ado, why might the new experiment, impressive though it was, be efficiently simulable classically? A central reason for concern is photon loss: as Chaoyang Lu has now explicitly confirmed (it was implicit in the paper), up to ~70% of the photons get lost on their way through the beamsplitter network, leaving only ~30% to be detected. At least with “Fock state” BosonSampling—i.e., the original kind, the kind with single-photon inputs that Alex Arkhipov and I proposed in 2011—it seems likely to me that such a loss rate would be fatal for quantum supremacy; see for example this 2019 paper by Renema, Shchesnovich, and Garcia-Patron.

Incidentally, if anything’s become clear over the last two weeks, it’s that I, the co-inventor of BosonSampling, am no longer any sort of expert on the subject’s literature!

Anyway, one source of uncertainty regarding the photon loss issue is that, as I said in my last post, the USTC experiment implemented a 2016 variant of BosonSampling called Gaussian BosonSampling (GBS)—and Jelmer tells me that the computational complexity of GBS in the presence of losses hasn’t yet been analyzed in the relevant regime, though there’s been work aiming in that direction. A second source of uncertainty is simply that the classical simulations work in a certain limit—namely, fixing the rate of noise and then letting the numbers of photons and modes go to infinity—but any real experiment has a fixed number of photons and modes (in USTC’s case, they’re ~50 and ~100 respectively). It wouldn’t do to reject USTC’s claim via a theoretical asymptotic argument that would equally well apply to any non-error-corrected quantum supremacy demonstration!

OK, but if an efficient classical simulation of lossy GBS experiments exists, then what is it? How does it work? It turns out that we have a plausible candidate for the answer to that, originating with a 2014 paper by Gil Kalai and Guy Kindler. Given a beamsplitter network, Kalai and Kindler considered an infinite hierarchy of better and better approximations to the BosonSampling distribution for that network. Roughly speaking, at the first level (k=1), one pretends that the photons are just classical distinguishable particles. At the second level (k=2), one correctly models quantum interference involving pairs of photons, but none of the higher-order interference. At the third level (k=3), one correctly models three-photon interference, and so on until k=n (where n is the total number of photons), when one has reproduced the original BosonSampling distribution. At least when k is small, the time needed to spoof outputs at the kth level of the hierarchy should grow like n^k. As theoretical computer scientists, Kalai and Kindler didn’t care whether their hierarchy produced any physically realistic kind of noise, but later work, by Shchesnovich, Renema, and others, showed that (as it happens) it does.
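To make the bottom of the hierarchy concrete, here’s a minimal sketch (with a made-up random interferometer, not anyone’s actual spoofing code) of the k=1 level: treat the photons as classical distinguishable particles and route each one through the network independently, so that all multi-photon interference is simply ignored.

```python
import numpy as np

def sample_distinguishable(U, input_modes, shots=1000, rng=None):
    """k=1 (classical, distinguishable-photon) spoofing of BosonSampling:
    each photon is routed through the interferometer independently,
    with no multi-photon interference at all."""
    rng = np.random.default_rng() if rng is None else rng
    m = U.shape[0]
    samples = []
    for _ in range(shots):
        counts = np.zeros(m, dtype=int)
        for i in input_modes:
            probs = np.abs(U[:, i]) ** 2   # column i gives this photon's output distribution
            probs /= probs.sum()           # guard against floating-point drift
            counts[rng.choice(m, p=probs)] += 1
        samples.append(tuple(counts))
    return samples

# toy example: a random 6-mode interferometer with 3 input photons
m = 6
X = np.random.randn(m, m) + 1j * np.random.randn(m, m)
U, _ = np.linalg.qr(X)                     # Haar-ish random unitary
print(sample_distinguishable(U, input_modes=[0, 1, 2], shots=5))
```

What this level gets wrong, and what the k≥2 levels progressively add back, is precisely the genuine multi-photon interference.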

In its original paper, the USTC team ruled out the possibility that the first, k=1 level of this hierarchy could explain its experimental results. More recently, in response to inquiries by Sergio, Gil, Jelmer, and others, Chaoyang tells me they’ve ruled out the possibility that the k=2 level can explain their results either. We’re now eagerly awaiting the answer for larger values of k.

Let me add that I owe Gil Kalai the following public mea culpa. While his objections to QC have often struck me as unmotivated and weird, in the case at hand, Gil’s 2014 work with Kindler is clearly helping drive the scientific discussion forward. In other words, at least with BosonSampling, it turns out that Gil put his finger precisely on a key issue. He did exactly what every QC skeptic should do, and what I’ve always implored the skeptics to do.

II. BosonSampling vs. Random Circuit Sampling: A Tale of HOG and CHOG and LXEB

There’s a broader question: why should skeptics of a BosonSampling experiment even have to think about messy details like the rate of photon losses? Why shouldn’t that be solely the experimenters’ job?

To understand what I mean, consider the situation with Random Circuit Sampling, the task Google demonstrated last year with 53 qubits. There, the Google team simply collected the output samples and fed them into a benchmark that they called “Linear Cross-Entropy” (LXEB), closely related to what Lijie Chen and I called “Heavy Output Generation” (HOG) in a 2017 paper. With suitable normalization, an ideal quantum computer would achieve an LXEB score of 2, while classical random guessing would achieve an LXEB score of 1. Crucially, according to a 2019 result by me and Sam Gunn, under a plausible (albeit strong) complexity assumption, no subexponential-time classical spoofing algorithm should be able to achieve an LXEB score that’s even slightly higher than 1. In its experiment, Google reported an LXEB score of about 1.002, with a confidence interval much smaller than 0.002. Hence: quantum supremacy (subject to our computational assumption), with no further need to know anything about the sources of noise in Google’s chip! (More explicitly, Boixo, Smelyansky, and Neven did a calculation in 2017 to show that the Kalai-Kindler type of spoofing strategy definitely isn’t going to work against RCS and Linear XEB, with no computational assumption needed.)
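For concreteness, here’s a toy sketch of the LXEB statistic itself—not Google’s actual pipeline, just the bare score—using a made-up Porter-Thomas-like distribution in place of the ideal circuit probabilities:

```python
import numpy as np

def lxeb_score(samples, ideal_probs):
    """Linear cross-entropy benchmark, normalized so that sampling from the
    ideal (Porter-Thomas-like) distribution scores ~2 and uniform random
    guessing scores ~1.  ideal_probs[x] is the ideal probability of outcome x."""
    N = len(ideal_probs)
    return N * np.mean([ideal_probs[x] for x in samples])

# toy check with an exponentially-distributed (Porter-Thomas-shaped) distribution
rng = np.random.default_rng(0)
N = 2 ** 12
p = rng.exponential(size=N)
p /= p.sum()
ideal_samples   = rng.choice(N, size=20000, p=p)
uniform_samples = rng.integers(N, size=20000)
print(lxeb_score(ideal_samples, p))    # ≈ 2
print(lxeb_score(uniform_samples, p))  # ≈ 1
```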

So then why couldn’t the USTC team do something analogous with BosonSampling? Well, they tried to. They defined a measure that they called “HOG,” although it’s different from my and Lijie Chen’s HOG, more similar to a cross-entropy. Following Jelmer, let me call their measure CHOG, where the C could stand for Chinese, Chaoyang’s, or Changed. They calculated the CHOG for their experimental samples, and showed that it exceeds the CHOG that you’d get from the k=1 and k=2 levels of the Kalai-Kindler hierarchy, as well as from various other spoofing strategies, thereby ruling those out as classical explanations for their results.

The trouble is this: unlike with Random Circuit Sampling and LXEB, with BosonSampling and CHOG, we know that there are fast classical algorithms that achieve better scores than the trivial algorithm, the algorithm that just picks samples at random. That follows from Kalai and Kindler’s work, and it even more simply follows from a 2013 paper by me and Arkhipov, entitled “BosonSampling Is Far From Uniform.” Worse yet, with BosonSampling, we currently have no analogue of my 2019 result with Sam Gunn: that is, a result that would tell us (under suitable complexity assumptions) the highest possible CHOG score that we expect any efficient classical algorithm to be able to get. And since we don’t know exactly where that ceiling is, we can’t tell the experimentalists exactly what target they need to surpass in order to claim quantum supremacy. Absent such definitive guidance from us, the experimentalists are left playing whac-a-mole against this possible classical spoofing strategy, and that one, and that one.

This is an issue that I and others were aware of for years, although the new experiment has certainly underscored it. Had I understood just how serious the USTC group was about scaling up BosonSampling, and fast, I might’ve given the issue some more attention!

III. Fock vs. Gaussian BosonSampling

Above, I mentioned another complication in understanding the USTC experiment: namely, their reliance on Gaussian BosonSampling (GBS) rather than Fock BosonSampling (FBS), sometimes also called Aaronson-Arkhipov BosonSampling (AABS). Since I gave this issue short shrift in my previous post, let me make up for it now.

In FBS, the initial state consists of either 0 or 1 photons in each input mode, like so: |1,…,1,0,…,0⟩. We then pass the photons through our beamsplitter network, and measure the number of photons in each output mode. The result is that the amplitude of each possible output configuration can be expressed as the permanent of some n×n matrix, where n is the total number of photons. It was interest in the permanent, which plays a central role in classical computational complexity, that led me and Arkhipov to study BosonSampling in the first place.
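To make the permanent connection explicit, here’s a tiny illustrative sketch—brute force, for a handful of photons only, and restricted to collision-free outcomes—of how an output probability in FBS comes from an n×n submatrix of the interferometer unitary:

```python
import numpy as np
from itertools import permutations

def permanent(A):
    """Permanent by brute force (fine for the tiny n of this illustration)."""
    n = A.shape[0]
    return sum(np.prod([A[i, sigma[i]] for i in range(n)])
               for sigma in permutations(range(n)))

def fock_bs_probability(U, input_modes, output_modes):
    """Probability of a collision-free output pattern in Fock BosonSampling:
    |Per(A)|^2, where A is the submatrix of U picking out the occupied
    input columns and output rows."""
    A = U[np.ix_(output_modes, input_modes)]
    return abs(permanent(A)) ** 2

m = 5
X = np.random.randn(m, m) + 1j * np.random.randn(m, m)
U, _ = np.linalg.qr(X)                         # random 5-mode interferometer
print(fock_bs_probability(U, input_modes=[0, 1, 2], output_modes=[1, 3, 4]))
```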

The trouble is, preparing initial states like |1,…,1,0,…,0⟩ turns out to be really hard. No one has yet built a source that reliably outputs one and only one photon at exactly a specified time. This led two experimental groups to propose an idea that, in a 2013 post on this blog, I named Scattershot BosonSampling (SBS). In SBS, you get to use the more readily available “Spontaneous Parametric Down-Conversion” (SPDC) photon sources, which output superpositions over different numbers of photons, of the form $$\sum_{n=0}^{\infty} \alpha_n |n \rangle |n \rangle, $$ where α_n decreases exponentially with n. You then measure the left half of each entangled pair, hope to see exactly one photon, and are guaranteed that if you do, then there’s also exactly one photon in the right half. Crucially, one can show that, if Fock BosonSampling is hard to simulate approximately using a classical computer, then the Scattershot kind must be as well.

OK, so what’s Gaussian BosonSampling? It’s simply the generalization of SBS where, instead of SPDC states, our input can be an arbitrary “Gaussian state”: for those in the know, a state that’s exponential in some quadratic polynomial in the creation operators. If there are m modes, then such a state requires ~m^2 independent parameters to specify. The quantum optics people have a much easier time creating these Gaussian states than they do creating single-photon Fock states.

While the amplitudes in FBS are given by permanents of matrices (and thus, the probabilities by the absolute squares of permanents), the probabilities in GBS are given by a more complicated matrix function called the Hafnian. Roughly speaking, while the permanent counts the number of perfect matchings in a bipartite graph, the Hafnian counts the number of perfect matchings in an arbitrary graph. The permanent and the Hafnian are both #P-complete. In the USTC paper, they talk about yet another matrix function called the “Torontonian,” which was invented two years ago. I gather that the Torontonian is just the modification of the Hafnian for the situation where you only have “threshold detectors” (which decide whether one or more photons are present in a given mode), rather than “number-resolving detectors” (which count how many photons are present).
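Here’s a small sketch of the Hafnian as a matching count, by brute force (hopeless beyond a dozen or so modes, of course, since the Hafnian is #P-complete), together with the standard embedding of the permanent as a special case:

```python
import numpy as np
from itertools import permutations

def hafnian(A):
    """Brute-force Hafnian of a symmetric 2k x 2k matrix: sum, over all perfect
    matchings of {0,...,2k-1}, of the product of the matched entries.  For a 0/1
    adjacency matrix this is exactly the number of perfect matchings of the graph."""
    n = A.shape[0]
    if n == 0:
        return 1
    if n % 2:
        return 0
    total = 0
    rest = list(range(1, n))
    for j in rest:
        remaining = [k for k in rest if k != j]      # match vertex 0 with vertex j
        total += A[0, j] * hafnian(A[np.ix_(remaining, remaining)])
    return total

def permanent(B):
    n = B.shape[0]
    return sum(np.prod([B[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

# the 4-cycle has exactly two perfect matchings
C4 = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])
print(hafnian(C4))                                   # 2

# permanents are a special case: Per(B) = Haf([[0, B], [B^T, 0]])
B = np.random.rand(3, 3)
block = np.block([[np.zeros((3, 3)), B], [B.T, np.zeros((3, 3))]])
print(hafnian(block), permanent(B))                  # equal, up to float rounding
```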

If Gaussian BosonSampling includes Scattershot BosonSampling as a special case, and if Scattershot BosonSampling is at least as hard to simulate classically as the original BosonSampling, then you might hope that GBS would also be at least as hard to simulate classically as the original BosonSampling. Alas, this doesn’t follow. Why not? Because for all we know, a random GBS instance might be a lot easier than a random SBS instance. Just because permanents can be expressed using Hafnians, doesn’t mean that a random Hafnian is as hard as a random permanent.

Nevertheless, I think it’s very likely that the sort of analysis Arkhipov and I did back in 2011 could be mirrored in the Gaussian case. I.e., instead of starting with reasonable assumptions about the distribution and hardness of random permanents, and then concluding the classical hardness of approximate BosonSampling, one would start with reasonable assumptions about the distribution and hardness of random Hafnians (or “Torontonians”), and conclude the classical hardness of approximate GBS. But this is theoretical work that remains to be done!

IV. Application to Molecular Vibronic Spectra?

In 2014, Alan Aspuru-Guzik and collaborators put out a paper that made an amazing claim: namely that, contrary to what I and others had said, BosonSampling was not an intrinsically useless model of computation, good only for refuting QC skeptics like Gil Kalai! Instead, they said, a BosonSampling device (specifically, what would later be called a GBS device) could be directly applied to solve a practical problem in quantum chemistry. This is the computation of “molecular vibronic spectra,” also known as “Franck-Condon profiles,” whatever those are.

I never understood nearly enough about chemistry to evaluate this striking proposal, but I was always a bit skeptical of it, for the following reason. Nothing in the proposal seemed to take seriously that BosonSampling is a sampling task! A chemist would typically have some specific numbers that she wants to estimate, of which these “vibronic spectra” seemed to be an example. But while it’s often convenient to estimate physical quantities via Monte Carlo sampling over simulated observations of the physical system you care about, that’s not the only way to estimate physical quantities! And worryingly, in all the other examples we’d seen where BosonSampling could be used to estimate a number, the same number could also be estimated using one of several polynomial-time classical algorithms invented by Leonid Gurvits. So why should vibronic spectra be an exception?

After an email exchange with Alex Arkhipov, Juan Miguel Arrazola, Leonardo Novo, and Raul Garcia-Patron, I believe we finally got to the bottom of it, and the answer is: vibronic spectra are not an exception.

In terms of BosonSampling, the vibronic spectra task is simply to estimate the probability histogram of some weighted sum like $$ w_1 s_1 + \cdots + w_m s_m, $$ where w_1,…,w_m are fixed real numbers, and (s_1,…,s_m) is a possible outcome of the BosonSampling experiment, s_i representing the number of photons observed in mode i. Alas, while it takes some work, it turns out that Gurvits’s classical algorithms can be adapted to estimate these histograms. Granted, running the actual BosonSampling experiment would provide slightly more detailed information—namely, some exact sampled values of $$ w_1 s_1 + \cdots + w_m s_m, $$ rather than merely additive approximations to the values—but since we’d still need to sort those sampled values into coarse “bins” in order to compute a histogram, it’s not clear why that additional precision would ever be of chemical interest.
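Here’s a toy numerical illustration of that last point—with made-up photon-count statistics standing in for a real GBS device—showing that once you bin the values of the weighted sum coarsely, additive approximations to those values give essentially the same histogram as the exact ones:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8
w = rng.uniform(0, 1, size=m)                   # fixed weights w_1,...,w_m
s = rng.poisson(0.5, size=(5000, m))            # stand-in for GBS photon-count samples

exact_values  = s @ w                           # exact sampled values of the weighted sum
approx_values = exact_values + rng.normal(0, 0.05, size=len(exact_values))  # additive error

bins = np.linspace(0, exact_values.max(), 20)   # the coarse bins a chemist would plot
h_exact,  _ = np.histogram(exact_values,  bins=bins, density=True)
h_approx, _ = np.histogram(approx_values, bins=bins, density=True)
print(np.max(np.abs(h_exact - h_approx)))       # the two histograms nearly coincide
```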

This is a pity, since if the vibronic spectra application had beaten what was doable classically, then it would’ve provided not merely a first practical use for BosonSampling, but also a lovely way to verify that a BosonSampling device was working as intended.

V. Application to Finding Dense Subgraphs?

A different potential application of Gaussian BosonSampling, first suggested by the Toronto-based startup Xanadu, is finding dense subgraphs in a graph. (Or at least, providing an initial seed to classical optimization methods that search for dense subgraphs.)

This is an NP-hard problem, so to say that I was skeptical of the proposal would be a gross understatement. Nevertheless, it turns out that there is a striking observation by the Xanadu team at the core of their proposal: namely that, given a graph G and a positive even integer k, a GBS device can be used to sample a random subgraph of G of size k, with probability proportional to the square of the number of perfect matchings in that subgraph. Cool, right? And potentially even useful, especially if the number of perfect matchings could serve as a rough indicator of the subgraph’s density! Alas, Xanadu’s Juan Miguel Arrazola himself recently told me that there’s a cubic-time classical algorithm for the same sampling task, so that the possible quantum speedup that one could get from GBS in this way is at most polynomial. The search for a useful application of BosonSampling continues!


And that’s all for now! I’m grateful to all the colleagues I talked to over the last couple weeks, including Alex Arkhipov, Juan Miguel Arrazola, Sergio Boixo, Raul Garcia-Patron, Leonid Gurvits, Gil Kalai, Chaoyang Lu, John Martinis, and Jelmer Renema, while obviously taking sole responsibility for any errors in the above. I look forward to a spirited discussion in the comments, and of course I’ll post updates as I learn more!

Beth Harmon and the Inner World of Squares

Monday, December 14th, 2020

The other day Dana and I finished watching The Queen’s Gambit, Netflix’s fictional saga of an orphaned girl in the 1960’s, Beth Harmon, who breaks into competitive chess and destroys one opponent after the next in her quest to become the world champion, while confronting her inner demons and addictions.

The show is every bit as astoundingly good as everyone says it is, and I might be able to articulate why. It’s because, perhaps surprisingly given the description, this is a story where chess actually matters—and indeed, the fact that chess matters so deeply to Beth and most of the other characters is central to the narrative.  (As in two pivotal scenes where Beth has sex with a male player, and then either she or he goes right back to working on chess.)

I’ve watched a lot of TV shows and movies, supposedly about scientists, where the science was just an interchangeable backdrop to what the writers clearly regarded as a more important story.  (As one random example, the drama NUMB3RS, supposedly about an FBI mathematician, where “math” could’ve been swapped out for “mystical crime-fighting intuition” with barely any change.)

It’s true that a fictional work about scientists shouldn’t try to be a science documentary, just like Queen’s Gambit doesn’t try to be a chess documentary.  But if you’re telling a story about characters who are obsessed with topic X, then you need to make their obsession plausible, make the entire story hinge on it, and even make the audience vicariously feel the same obsession.

This is precisely what Queen’s Gambit does for chess.  It’s a chess drama where the characters are constantly talking about chess, thinking about chess, and playing chess—and that actually succeeds in making that riveting.  (Even if most of the audience can’t follow what’s happening on the board, it turns out that it doesn’t matter, since you can simply convey the drama through the characters’ faces and the reactions of those around them.)

Granted, a few aspects of competitive chess in the series stood out as jarringly unrealistic even to a novice like me: for example, the almost complete lack of draws.  But as for the board positions—well, apparently Kasparov was a consultant, and he helped meticulously design each one to reflect the characters’ skill levels and what was happening in the plot.

While the premise sounds like a feminist wish-fulfillment fantasy—orphan girl faces hundreds of intimidating white men in the sexist 1960s, orphan girl beats them all at their own game with style and aplomb—this is not at all a MeToo story, or a story about male crudity or predation.  It’s after bigger fish than that.  The series, you might say, conforms to all the external requirements of modern woke ideology, yet the actual plot subverts the tenets of that ideology, or maybe just ignores them, in its pursuit of more timeless themes.

At least once Beth Harmon enters the professional chess world, the central challenges she needs to overcome are internal and mental—just like they’re supposed to be in chess.  It’s not the Man or the Patriarchy or any other external power (besides, of course, skilled opponents) holding her down.  Again and again, the top male players are portrayed not as sexist brutes but as gracious, deferential, and even awestruck by Beth’s genius after she’s humiliated them on the chessboard.  And much of the story is about how those vanquished opponents then turn around and try to help Beth, and about how she needs to learn to accept their help in order to evolve as a player and a human being.

There’s also that, after defeating male player after male player, Beth sleeps with them, or at least wants to.  I confess that, as a teenager, I would’ve found that unlikely and astonishing.  I would’ve said: obviously, the only guys who’d even have a chance to prove themselves worthy of the affection of such a brilliant and unique woman would be those who could beat her at chess.  Anyone else would just be dirt between her toes.  In the series, though, each male player propositions Beth only after she’s soundly annihilated him.  And she’s never once shown refusing.

Obviously, I’m no Beth Harmon; I’ll never be close in my field to what she is in hers.  Equally obviously, I grew up in a loving family, not an orphanage.  Still, I was what some people would call a “child prodigy,” what with the finishing my PhD at 22 and whatnot, so naturally that colored my reaction to the show.

There’s a pattern that goes like this: you’re obsessively interested, from your first childhood exposure, in something that most people aren’t.  Once you learn what the something is, it’s evident to you that your life’s vocation couldn’t possibly be anything else, unless some external force prevents you.  Alas, in order to pursue the something, you first need to get past bullies and bureaucrats, who dismiss you as a nobody, put barriers in your way, despise whatever you represent to them.  After a few years, though, the bullies can no longer stop you: you’re finally among peers or superiors in your chosen field, regularly chatting with them on college campuses or at conferences in swanky hotels, and the main limiting factor is just the one between your ears. 

You feel intense rivalries with your new colleagues, of course, you desperately want to excel them, but the fact that they’re all on the same obsessive quest as you means you can never actually hate them, as you did the bureaucrats or the bullies.  There’s too much of you in your competitors, and of them in you.

As you pursue your calling, you feel yourself torn in the following way.  On the one hand, you feel close to a moral obligation to humanity not to throw away whatever “gift” you were “given” (what loaded terms), to take the calling as far as it will go.  On the other hand, you also want the same things other people want, like friendship, validation, and of course sex.

In such a case, two paths naturally beckon.  The first is that of asceticism: making a virtue of eschewing all temporal attachments, romance or even friendship, in order to devote yourself entirely to the calling.  The second is that of renouncing the calling, pretending it never existed, in order to fit in and have a normal life.  Your fundamental challenge is to figure out a third path, to plug yourself into a community where the relentless pursuit of the unusual vocation and the friendship and the sex can all complement each other rather than being at odds.

It would be an understatement to say that I have some familiarity with this narrative arc.

I’m aware, of course, of the irony, that I can identify with so many contours of Beth Harmon’s journey—I, Scott Aaronson, who half the Internet denounced six years ago as a misogynist monster who denies the personhood and interiority of women.  In that life-alteringly cruel slur, there was a microscopic grain of truth, and it’s this: I’m not talented at imagining myself into the situations of people different from me.  It’s never been my strong suit.  I might like and admire people different from me, I might sympathize with their struggles and wish them every happiness, but I still don’t know what they’re thinking until they tell me.  And even then, I don’t fully understand it.

As one small but illustrative example, I have no intuitive understanding—zero—of what it’s like to be romantically attracted to men, or what any man could do or say or look like that could possibly be attractive to women.  If you have such an understanding, then imagine yourself sighted and me blind.  Intellectually, I might know that confidence or height or deep brown eyes or brooding artistry are supposed to be attractive in human males, but only because I’m told.  As far as my intuition is concerned, pretty much all men are equally hairy, smelly, and gross, a large fraction of women are alluring and beautiful and angelic, and both of those are just objective features of reality that no one could possibly see otherwise.

Thus, whenever I read or watch fiction starring a female protagonist who dates men, it’s very easy for me to imagine that protagonist judging me, enumerating my faults, and rejecting me, and very hard for me to do what I’m supposed to do, which is to put myself into her shoes.  I could watch a thousand female protagonists kiss a thousand guys onscreen, or wake up in bed next to them, and the thousandth-and-first time I’d still be equally mystified about what she saw in such a sweaty oaf and why she didn’t run from him screaming, and I’d snap out of vicariously identifying with her.  (Understanding gay men of course presents similar difficulties; understanding lesbians is comparatively easy.)

It’s possible to overcome this, but it takes an extraordinary female protagonist, brought to life by an extraordinary writer.  Off the top of my head, I can think of only a few.  There were Renee Feuer and Eva Mueller, the cerebral protagonists of Rebecca Newberger Goldstein’s The Mind-Body Problem and The Late Summer Passion of a Woman of Mind.  Maybe Ellie Arroway from Carl Sagan’s Contact.  And then there’s Beth Harmon.  With characters like these, I can briefly enter a space where their crushes on men seem no weirder or more inexplicable to me than my own teenage crushes … just, you know, inverted.  Sex is in any case secondary to the character’s primary urge to discover timeless truths, an urge that I fully understand because I’ve shared it.

Granted, the timeless truths of chess, an arbitrary and invented game, are less profound than those of quantum gravity or the P vs. NP problem, but the psychology is much the same, and The Queen’s Gambit does a good job of showing that.  To understand the characters of this series is to understand why they could be happier to lose an interesting game than to win a boring one.  And I could appreciate that, even if I was by no means the strongest player at my elementary school’s chess club, and the handicap with which I can beat my 7-year-old daughter is steadily decreasing.

Happy Chanukah / Vaccine Approval Day!

Friday, December 11th, 2020
  1. Inspired by my survey article, John Pavlus has now published an article on Busy Beaver for Quanta magazine.
  2. This week, I flitted back and forth between two virtual conferences: the Institute for Advanced Study’s Online Workshop on Qubits and Black Holes (which I co-organized with Juan Maldacena and Mark Van Raamsdonk), and Q2B (Quantum 2 Business) 2020, organized by QC Ware, for which I did my now-annual Ask-Me-Anything session. It was an interesting experience, switching between Euclidean path integrals and replica wormholes that I barely understood, and corporate pitches for near-term quantum computing that I … well, did understand! Anyway, happy to discuss either conference in the comments.
  3. For anyone interested in the new Chinese quantum supremacy claim based on Gaussian BosonSampling—the story has been developing rapidly all week, with multiple groups trying to understand the classical difficulty of simulating the experiment. I plan to write a followup post soon!
  4. The Complexity Zoo has now officially moved from the University of Waterloo to complexityzoo.net, hosted by the LessWrong folks! Thanks so much to Oliver Habryka for setting this up. Update (Dec. 12): Alas, complexityzoo.com no longer works if you use https. I don’t know how to fix it—the Bluehost control panel provides no options—and I’m not at a point in life where I can deal again with Bluehost SSL certificate hell. (How does everyone else deal with this shit? That’s the one part I don’t understand.) So, for now, you’ll need to update your bookmarks to complexityzoo.net.
  5. In return for his help with the Zoo, Oliver asked me to help publicize a handsome $29 five-book set, “A Map that Reflects the Territory,” containing a selection of the best essays from LessWrong, including multiple essays by the much-missed Scott Alexander, and an essay on common knowledge inspired by my own Common Knowledge and Aumann’s Agreement Theorem. (See also the FAQ.) If you know any LW fans, I can think of few better gifts to go under their Christmas tree or secular rationalist equivalent.

Shor’s algorithm in higher dimensions: Guest post by Greg Kuperberg

Monday, December 7th, 2020

Upbeat advertisement: If research in QC theory or CS theory otherwise is your thing, then wouldn’t you like to live in peaceful, quiet, bicycle-based Davis, California, and be a faculty member at the large, prestigious, friendly university known as UC Davis? In the QCQI sphere, you’d have Marina Radulaski, Bruno Nachtergaele, Martin Fraas, Mukund Rangamani, Veronika Hubeny, and Nick Curro as faculty colleagues, among others; and yours truly, and hopefully more people in the future. This year the UC Davis CS department has a faculty opening in quantum computing, and another faculty opening in CS theory including quantum computing. If you are interested, then time is of the essence, since the full-consideration deadline is December 15.


In this guest post, I will toot my own horn about a paper in progress (hopefully nearly finished) that goes back to the revolutionary early days of quantum computing, namely Shor’s algorithm. The takeaway: I think that the strongest multidimensional generalization of Shor’s algorithm has been missed for decades. It appears to be a new algorithm that does more than the standard generalization described by Kitaev. (Scott wanted me to channel Captain Kirk and boldly go with a takeaway, so I did.)

Unlike Shor’s algorithm proper, I don’t know of any dramatic applications of this new algorithm. However, more than one quantum algorithm was discovered just because it looked interesting, and then found applications later. The input to Shor’s algorithm is a function \(f:\mathbb{Z} \to S\), in other words a symbol-valued function \(f\) on the integers, which is periodic with an unknown period \(p\) and otherwise injective. In equations, \(f(x) = f(y)\) if and only if \(p\) divides \(x-y\). In saying that the input is a function \(f\), I mean that Shor’s algorithm is provided with an algorithm to compute \(f\) efficiently. Shor’s algorithm itself can then find the period \(p\) in (quantum) polynomial time in the number of digits of \(p\). (Not polynomial time in \(p\), polynomial time in its logarithm.) If you’ve heard that Shor’s algorithm can factor integers, that is just one special case where \(f(x) = a^x\) mod \(N\), the integer to factor. In its generalized form, Shor’s algorithm is miraculous. In particular, if \(f\) is a black-box function, then it is routine to prove that any classical algorithm to do the same thing needs exponentially many values of \(f\), or values \(f(x)\) where \(x\) has exponentially many digits.
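For readers who know Shor’s algorithm mainly through factoring, here is a minimal classical sketch of how that special case reduces to period-finding. The brute-force period loop below is just a stand-in for the quantum part; the surrounding number theory is the same either way.

```python
from math import gcd

def classical_period(a, N):
    """Brute-force stand-in for the quantum core of Shor's algorithm:
    the smallest p > 0 with a^p = 1 (mod N)."""
    x, p = a % N, 1
    while x != 1:
        x = (x * a) % N
        p += 1
    return p

def factor_via_period(N, a):
    """Classical wrapper around period finding: if p is even and
    a^(p/2) != -1 (mod N), then gcd(a^(p/2) - 1, N) splits N."""
    if gcd(a, N) != 1:
        return gcd(a, N)           # lucky: a shares a factor with N already
    p = classical_period(a, N)
    if p % 2:
        return None                # unlucky choice of a; try another
    y = pow(a, p // 2, N)
    if y == N - 1:
        return None
    return gcd(y - 1, N)

print(factor_via_period(15, 7))    # period of 7 mod 15 is 4; gcd(7^2 - 1, 15) = 3
```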

Shor’s algorithm begat the Shor-Kitaev algorithm, which does the same thing for a higher dimensional periodic function \(f:\mathbb{Z}^d \to S\), where \(f\) is now periodic with respect to a lattice \(L\). The Shor-Kitaev algorithm in turn begat the hidden subgroup problem (called HSP among friends), where \(\mathbb{Z}\) or \(\mathbb{Z}^d\) is replaced by a group \(G\), and now \(f\) is \(L\)-periodic for some subgroup \(L\). HSP varies substantially in both its computational difficulty and its complexity status, depending on the structure of \(G\) as well as optional restrictions on \(L\).

A funny thing happened on the way to the forum in later work on HSP. Most of the later work has been in the special case that the ambient group \(G\) is finite, even though \(G\) is infinite in the famous case of Shor’s algorithm. My paper-to-be explores the hidden subgroup problem in various cases when \(G\) is infinite. In particular, I noticed that even the case \(G = \mathbb{Z}^d\) isn’t fully solved, because the Shor-Kitaev algorithm makes the extra assumption that \(L\) is a maximum-rank lattice, or equivalently that \(L\) is a finite-index subgroup of \(\mathbb{Z}^d\). As far as I know, the more general case where \(L\) might have lower rank wasn’t treated previously. I found an extension of Shor-Kitaev to handle this case, which I will sketch after discussing some points about HSP in general.

Quantum algorithms for HSP

Every known quantum algorithm for HSP has the same two opening steps. First prepare an equal superposition \(|\psi_G\rangle\) of “all” elements of the ambient group \(G\), then apply a unitary form of the hiding function \(f\) to get the following: \[ U_f|\psi_G\rangle \propto \sum_{x \in G} |x,f(x)\rangle. \] Actually, you can only do exactly this when \(G\) is a finite group. You cannot make an equal quantum superposition on an infinite set, for the same reason that you cannot choose an integer uniformly at random from among all of the integers: It would defy the laws of probability. Since computers are finite, a realistic quantum algorithm cannot make an unequal quantum superposition on an infinite set either. However, if \(G\) is a well-behaved infinite group, then you can approximate the same idea by making an equal superposition on a large but finite box \(B \subseteq G\) instead: \[ U_f|\psi_B\rangle \propto \sum_{x \in B \subseteq G} |x,f(x)\rangle. \] Quantum algorithms for HSP now follow a third counterintuitive “step”, namely, that you should discard the output qubits that contain the value \(f(x)\). You should take the values of \(f\) to be incomprehensible data, encrypted for all you know. A good quantum algorithm evaluates \(f\) too few times to interpret its output, so you might as well let it go. (By contrast, a classical algorithm is forced to dig for the only meaningful information that the output of \(f\) has. Namely, it has to keep searching until it finds equal values.) What remains, and what turns out to be highly valuable, is the input state in a partially measured form. I remember joking with Cris Moore about the different ways of looking at this step:

  1. You can measure the output qubits.
  2. The janitor can fish the output qubits out of the trash and measure them for you.
  3. You can secretly not measure the output qubits and say you did.
  4. You can keep the output qubits and say you threw them away.

Measuring the output qubits wins you the purely mathematical convenience that the posterior state on the input qubits is pure (a vector state) rather than mixed (a density matrix). However, since no use is made of the measured value, it truly makes no difference for the algorithm.

The final universal step for all HSP quantum algorithms is to apply a quantum Fourier transform (or QFT) to the input register and measure the resulting Fourier mode. This might seem like a creative step that may or may not be a good idea. However, if you have an efficient algorithm for the QFT for your particular group \(G\), then you might as well do this, because (taking the interpretation that you threw away the output register) the environment already knows the Fourier mode. You can assume that this Fourier mode has been published in the New York Times, and you won’t lose anything by reading the papers.
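If you’d like to see this pipeline in action without a quantum computer, here is a small NumPy calculation (mine, purely illustrative) of the exact distribution of the measured Fourier mode, for a box of size \(Q\) and the toy periodic function \(f(x) = x \bmod p\); only the periodicity structure of \(f\) matters here, so it stands in for whatever injective-on-cosets \(f\) you actually care about.

```python
# Exact output distribution of the HSP pipeline for G = Z on a box of size Q:
# prepare sum_x |x, f(x)>, discard the output register, apply the QFT over
# Z/Q, and measure.  Tracing out the output register leaves a mixture over
# the level sets of f, so Pr(y) is a sum of |FFT of indicator|^2 terms.
import numpy as np

Q, p = 2**8, 12                        # QFT radix and hidden period
x = np.arange(Q)
f = x % p                              # toy stand-in for an injective-on-cosets f

prob = np.zeros(Q)
for s in range(p):                     # one term per value of f
    indicator = (f == s).astype(float)
    prob += np.abs(np.fft.fft(indicator))**2
prob /= Q**2
assert np.isclose(prob.sum(), 1.0)

peaks = np.argsort(prob)[-p:]          # the p most likely Fourier modes y
print(sorted(int(y) for y in peaks))
print(sorted(round(r * Q / p) for r in range(p)))   # the two lists agree
```

The most likely Fourier modes land right next to the multiples of \(Q/p\), which is exactly the structure that the classical post-processing below will exploit.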

Fourier modes and Fourier stripes

I’ll now let \(G = \mathbb{Z}^d\) and make things more explicit, for starters by putting arrows on elements \(\vec{x} \in \mathbb{Z}^d\) to indicate that they are lattice vectors. The standard beginning produces a superposition \(|\psi_{L+\vec{v}}\rangle\) on a translate \(L+\vec{v}\) of the hidden lattice \(L\). (Again, \(L\) is the periodicity of \(f\).) If this state could be an equal superposition on the infinite set \(L+\vec{v}\), and if you could do a perfect QFT on the infinite group \(\mathbb{Z}^d\), then the resulting Fourier mode would be a randomly chosen element of a certain dual group \(L^\# \subseteq (\mathbb{R}/\mathbb{Z})^d\) inside the torus of Fourier modes of \(\mathbb{Z}^d\). Namely, \(L^\#\) consists of those vectors \(\vec{y} \in (\mathbb{R}/\mathbb{Z})^d\) such that the dot product \(\vec{x} \cdot \vec{y}\) is an integer for every \(\vec{x} \in L\). (If you expected the Fourier dual of the integers \(\mathbb{Z}\) to be a circle \(\mathbb{R}/2\pi\mathbb{Z}\) of length \(2\pi\), I found it convenient here to rescale it to a circle \(\mathbb{R}/\mathbb{Z}\) of length 1. This is often considered gauche these days, like using \(h\) instead of \(\hbar\) in quantum mechanics, but in context it’s okay.) In principle, you can learn \(L^\#\) from sampling it, and then learn \(L\) from \(L^\#\). Happily, the unknown and irrelevant translation vector \(\vec{v}\) is erased in this method.
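As a sanity check on the definition of \(L^\#\) (my own example, and only for the full-rank case so that the matrix below is invertible): if the rows of \(B\) form a basis of \(L\), then \(\vec{y} \in L^\#\) exactly when \(B\vec{y}\) is an integer vector, so \(L^\#\) mod 1 is generated by the columns of \(B^{-1}\).

```python
# The dual group L^# for a full-rank lattice L in Z^2, straight from the
# definition: y is in L^# iff B @ y is an integer vector.
import numpy as np

B = np.array([[3.0, 1.0],
              [0.0, 5.0]])        # rows form a basis of the hidden lattice L

dual_gens = np.linalg.inv(B)      # columns of B^{-1} generate L^# mod 1
for g in dual_gens.T:
    print(g % 1.0, B @ g)         # B @ g is an integer vector, up to float rounding
```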

In practice, it’s not so simple. As before, you cannot actually make an equal superposition on all of \(L+\vec{v}\), but only trimmed to a box \(B \subseteq \mathbb{Z}^d\). If you have \(q\) qubits available for each coordinate of \(\mathbb{Z}^d\), then \(B\) might be a \(d\)-dimensional cube with \(Q = 2^q\) lattice points in each direction. Following Peter Shor’s famous paper, the standard thing to do here is to identify \(B\) with the finite group \((\mathbb{Z}/Q)^d\) and do the QFT there instead. This is gauche as pure mathematics, but it’s reasonable as computer science. In any case, it works, but it comes at a price. You should rescale the resulting Fourier mode \(\vec{y} \in (\mathbb{Z}/Q)^d\) as \(\vec{y}_1 = \vec{y}/Q\) to match it to the torus \((\mathbb{R}/\mathbb{Z})^d\). Even if you do that, \(\vec{y}_1\) is not actually a uniformly random element of \(L^\#\), but rather a noisy, discretized approximation of one.

In Shor’s algorithm, the remaining work is often interpreted as the post-climax. In this case \(L = p\mathbb{Z}\), where \(p\) is the hidden period of \(f\), and \(L^\#\) consists of the multiples of \(1/p\) in \(\mathbb{R}/\mathbb{Z}\). The Fourier mode \(y_1\) (skipping the arrow since we are in one dimension) is an approximation to some fraction \(r/p\) with roughly \(q\) binary digits of precision. (\(y_1\) is often but not always the very best binary approximation to \(r/p\) with the available precision.) If you have enough precision, you can learn a fraction from its digits, either in base 2 or in any base. For instance, if I’m thinking of a fraction that is approximately 0.2857, then 2/7 is much closer than any other fraction with a one-digit denominator. As many people know, and as Shor explained in his paper, continued fractions are an efficient and optimal algorithm for this in larger cases.
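In Python, the continued-fraction step comes off the shelf as Fraction.limit_denominator, which returns the closest fraction with a bounded denominator; the bound plays the role of an a priori upper bound on \(p\). A two-line illustration using the numbers above:

```python
# Recovering a hidden fraction r/p from a truncated approximation, as in the
# classical post-processing of Shor's algorithm.  limit_denominator does the
# continued-fraction search for the best bounded-denominator approximation.
from fractions import Fraction

print(Fraction(0.2857).limit_denominator(10))     # -> 2/7, as in the example above
print(Fraction(107, 256).limit_denominator(12))   # Fourier mode y=107 with Q=256 recovers r/p = 5/12
```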

The Shor-Kitaev algorithm works the same way. You can denoise each coordinate of each Fourier sample \(\vec{y}_1\) with the continued fraction algorithm to obtain an exact element \(\vec{y}_0 \in L^\#\). You can learn \(L^\#\) with a polynomial number of samples, and then learn \(L\) from that with integer linear algebra. However, this approach can only work if \(L^\#\) is a finite group, or equivalently when \(L\) has maximum rank \(d\). This condition is explicitly stated in Kitaev’s paper, and in most but not all of the papers and books that cite this algorithm. If \(L\) has maximum rank, then the picture in Fourier space is a finite grid of isolated points.

However, if \(L\) has rank \(\ell < d\), then \(L^\#\) is instead a pattern of \((d-\ell)\)-dimensional stripes winding around the torus.

In this case, each coordinate of \(\vec{y}_1\) is flat random and individually irreparable. If you knew the direction of the stripes, then you could define a slanted coordinate system in which some of the coordinates of \(\vec{y}_1\) could be repaired. But the tangent directions of \(L^\#\) essentially beg the question: they are the orthogonal complement of \(L_\mathbb{R}\), the vector space spanned by the hidden subgroup \(L\). If you know \(L_\mathbb{R}\), then you can find \(L\) by running Shor-Kitaev in the lattice \(L_\mathbb{R} \cap \mathbb{Z}^d\).

My solution to this conundrum is to observe that the multiples of a randomly chosen point \(\vec{y}_0\) in \(L^\#\) have a good chance of filling out \(L^\#\) adequately well, in particular of landing near \(\vec{0}\) often enough to reveal the tangent directions of \(L^\#\). You have to make do with a noisy sample \(\vec{y}_1\) instead, but by making the QFT radix \(Q\) large enough, you can reduce the noise well enough for this to work. Still, even if you know that these small, high-quality multiples of \(\vec{y}_1\) exist, they are needles in an exponential haystack of bad multiples, so how do you find them? It turns out that the versatile LLL algorithm, which finds a basis of short vectors in a lattice, can be used here. The multiples of \(\vec{y}_0\) (say, for simplicity) aren’t a lattice; they are a dense orbit in \(L^\#\), or part of it. However, they are the shadow of a lattice one dimension higher, which you can supply to the LLL algorithm. This step lets you compute the linear span \(L_\mathbb{R}\) of \(L\) from its perpendicular space, and then, as mentioned, you can use Shor-Kitaev to learn the exact geometry of \(L\).
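Here is a brute-force illustration of the geometric point (my own sketch: no LLL, so exponentially slower than the real algorithm, and using an exact \(\vec{y}_0\) rather than the noisy \(\vec{y}_1\)). For a rank-1 hidden lattice in \(\mathbb{Z}^2\), the multiples of \(\vec{y}_0\) that land nearest the origin of the torus all point along the tangent direction of the stripes, which is exactly the direction the algorithm needs.

```python
# The hidden lattice L = Z*(2,3) has rank 1 inside Z^2, so L^# is a pattern
# of 1-dimensional stripes whose tangent direction is along (3,-2).  The
# multiples of a point y0 of L^# that land nearest the origin reveal that
# direction; the real algorithm finds them with LLL instead of brute force.
import numpy as np

rng = np.random.default_rng(0)
tangent = np.array([3.0, -2.0]) / np.hypot(3.0, 2.0)

t = rng.random()
y0 = (t * np.array([3.0, -2.0])) % 1.0   # an exact point of L^#; the algorithm
                                         # really gets a noisy y1, tamed by a large Q

def centered(v):
    # representative of v mod 1 in the box [-1/2, 1/2)^2
    return (v + 0.5) % 1.0 - 0.5

mults = np.array([centered(k * y0) for k in range(1, 500)])
norms = np.linalg.norm(mults, axis=1)
for v in mults[np.argsort(norms)[:5]]:              # the five multiples closest to the origin
    print(abs((v / np.linalg.norm(v)) @ tangent))   # each prints (essentially) 1.0
```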

Quantum supremacy, now with BosonSampling

Thursday, December 3rd, 2020

Update (12/5): The Google team, along with Gil Kalai, have raised questions about whether the results of the new BosonSampling experiment might be easier to spoof classically than the USTC team thought they were, because of a crucial difference between BosonSampling and qubit-based random circuit sampling. Namely, with random circuit sampling, the marginal distribution over any k output qubits (for small k) is exponentially close to the uniform distribution. With BosonSampling, by contrast, the marginal distribution over k output modes is distinguishable from uniform, as Arkhipov and I noted in a 2013 followup paper. On the one hand, these easily-detected nonuniformities provide a quick, useful sanity check for whether BosonSampling is being done correctly. On the other hand, they might also give classical spoofing algorithms more of a toehold. The question is whether, by spoofing the k-mode marginals, a classical algorithm could also achieve scores on the relevant “HOG” (Heavy Output Generation) benchmark that are comparable to what the USTC team reported.

One way or the other, this question should be resolvable by looking at the data that’s already been collected, and we’re trying now to get to the bottom of it. And having failed to flag this potential issue when I reviewed the paper, I felt a moral obligation at least to let my readers know about it as soon as I did. If nothing else, this is an answer to those who claim this stuff is all obvious. Please pardon the science underway!


A group led by Jianwei Pan and Chao-Yang Lu, based mainly at USTC in Hefei, China, announced today that it achieved BosonSampling with 40-70 detected photons—up to and beyond the limit where a classical supercomputer could feasibly verify the results. (Technically, they achieved a variant called Gaussian BosonSampling: a generalization of what I called Scattershot BosonSampling in a 2013 post on this blog.)

For more, see also Emily Conover’s piece in Science News, or Daniel Garisto’s in Scientific American, both of which I consulted on. (Full disclosure: I was one of the reviewers for the Pan group’s Science paper, and will be writing the Perspective article to accompany it.)

The new result follows the announcement of 14-photon BosonSampling by the same group a year ago. It represents the second time quantum supremacy has been reported, following Google’s celebrated announcement from last year, and the first time it’s been done using photonics rather than superconducting qubits.

As the co-inventor of BosonSampling (with Alex Arkhipov), obviously I’m gratified about this.

For anyone who regards it as boring or obvious, here and here is Gil Kalai, on this blog, telling me why BosonSampling would never scale beyond 8-10 photons. (He wrote that, if aliens forced us to try, then much like with the Ramsey number R(6,6), our only hope would be to attack the aliens.) Here’s Kalai making a similar prediction, on the impossibility of quantum supremacy by BosonSampling or any other means, in his plenary address to the International Congress of Mathematicians two years ago.

Even if we set aside the quantum computing skeptics, many colleagues told me they thought experimental BosonSampling was a dead end, because of photon losses and the staggering difficulty of synchronizing 50-100 single-photon sources. They said that a convincing demonstration of quantum supremacy would have to await the arrival of quantum fault-tolerance—or at any rate, some hardware platform more robust than photonics. I always agreed that they might be right. Furthermore, even if 50-photon BosonSampling was possible, after Google reached the supremacy milestone first with superconducting qubits, it wasn’t clear if anyone would still bother. Even when I learned a year ago about the USTC group’s intention to go for it, I was skeptical, figuring I’d believe it when I saw it.

Obviously the new result isn’t dispositive. Nevertheless, as someone whose intellectual origins are close to pure math, it’s strange and exciting to find myself in a field where, once in a while, the world itself gets to weigh in on a theoretical disagreement.

Since excitement is best when paired with accurate understanding, please help yourself to the following FAQ, which I might add more to over the next couple days.

What is BosonSampling? You must be new here! Briefly, it’s a proposal for achieving quantum supremacy by simply passing identical, non-interacting photons through an array of beamsplitters, and then measuring where they end up. For more: in increasing order of difficulty, here’s an MIT News article from back in 2011, here’s the Wikipedia page, here are my PowerPoint slides, here are my lecture notes from Rio de Janeiro, and here’s my original paper with Arkhipov.
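If you want to play with the definition, here's a tiny brute-force calculator (mine, exponential-time by design) for the standard BosonSampling distribution: n photons enter the first n modes of an m-mode interferometer U, and the probability of an output pattern S is |Per(A_S)|^2 / (s_1! ... s_m!), where A is the matrix of U's first n columns and A_S repeats row i of A a total of s_i times.

```python
# Brute-force BosonSampling distribution for tiny n, straight from the
# permanent formula.  For n = 2 and m = 4 it lists all ten output patterns
# and checks that their probabilities sum to 1.
from itertools import permutations, combinations_with_replacement
from math import factorial
import numpy as np

def permanent(M):
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def haar_unitary(m, rng):
    z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # phase fix so q is Haar-distributed

m, n = 4, 2
rng = np.random.default_rng(1)
U = haar_unitary(m, rng)
A = U[:, :n]                                       # photons enter modes 0 .. n-1

total = 0.0
for occupied in combinations_with_replacement(range(m), n):   # output patterns
    A_S = A[list(occupied), :]                     # row i of A repeated s_i times
    s_fact = np.prod([factorial(occupied.count(i)) for i in range(m)])
    prob = abs(permanent(A_S))**2 / s_fact
    total += prob
    print(occupied, round(prob, 4))
print("total probability:", round(total, 6))       # sums to 1.0
```

Verifying n-photon BosonSampling means computing n-by-n permanents like these (or their Gaussian-BosonSampling analogues), which is where the ~2^n cost discussed below comes from.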

What is quantum supremacy? Roughly, the use of a programmable or configurable quantum computer to solve some well-defined computational problem much faster than we know how to solve it with any existing classical computer. “Quantum supremacy,” a term coined by John Preskill in 2012, does not mean useful QC, or scalable QC, or fault-tolerant QC, all of which remain outstanding challenges. For more, see my Supreme Quantum Supremacy FAQ, or (e.g.) my recent Lytle Lecture for the University of Washington.

If Google already announced quantum supremacy a year ago, what’s the point of this new experiment? To me, at least, quantum supremacy seems important enough to do at least twice! Also, as I said, this represents the first demonstration that quantum supremacy is possible via photonics. Finally, as the authors point out, the new experiment has one big technical advantage compared to Google’s: namely, many more possible output states (~10^30 of them, rather than a mere ~9 quadrillion). This makes it infeasible to calculate the whole probability distribution over outputs and store it on a gigantic hard disk (after which one could easily generate as many samples as one wanted), which is what IBM proposed doing in its response to Google’s announcement.
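For the curious, here's where those two numbers come from (assuming, as in the USTC setup, 100 output modes read out by click/no-click detectors):

```python
# Counting possible output states: Sycamore reads out 53 qubits, while the
# USTC interferometer has 100 output modes with threshold detectors.
print(2**53)    # 9_007_199_254_740_992, i.e. ~9 quadrillion
print(2**100)   # ~1.27e30 possible click patterns
```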

Is BosonSampling a form of universal quantum computing? No, we don’t even think it can simulate universal classical computing! It’s designed for exactly one task: namely, demonstrating quantum supremacy and refuting Gil Kalai. It might have some other applications besides that, but if so, they’ll be icing on the cake. This is in contrast to Google’s Sycamore processor, which in principle is a universal quantum computer, just with a severe limit on the number of qubits (53) and how many layers of gates one can apply to them (about 20).

Is BosonSampling at least a step toward universal quantum computing? I think so! In 2000, Knill, Laflamme, and Milburn (KLM) famously showed that pure, non-interacting photons, passing through a network of beamsplitters, are capable of universal QC, provided we assume one extra thing: namely, the ability to measure the photons at intermediate times, and change which beamsplitters to apply to the remaining photons depending on the outcome. In other words, “BosonSampling plus adaptive measurements equals universality.” Basically, KLM is the holy grail that experimental optics groups around the world have been working toward for 20 years, with BosonSampling just a more achievable pit stop along the way.

Are there any applications of BosonSampling? We don’t know yet. There are proposals in the literature to apply BosonSampling to vibronic spectra in quantum chemistry, finding dense subgraphs, and other problems, but I’m not yet sure whether these proposals will yield real speedups over the best we can do with classical computers, for a task of practical interest that involves estimating specific numbers (as opposed to sampling tasks, where BosonSampling almost certainly does yield exponential speedups, but which are rarely the thing practitioners directly care about). [See this comment for further discussion of the issues regarding dense subgraphs.] In a completely different direction, one could try to use BosonSampling to generate cryptographically certified random bits, along the lines of my proposal from 2018, much like one could with qubit-based quantum circuits.

How hard is it to simulate BosonSampling on a classical computer? As far as we know today, the difficulty of simulating a “generic” BosonSampling experiment increases roughly like 2^n, where n is the number of detected photons. It might be easier than that, particularly when noise and imperfections are taken into account; and at any rate it might be easier to spoof the statistical tests that one applies to verify the outputs. I and others managed to give some theoretical evidence against those possibilities, but just like with Google’s experiment, it’s conceivable that some future breakthrough will change the outlook and remove the case for quantum supremacy.

Do you have any amusing stories? When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there. This was by far the most expensive referee report I ever wrote!

Also: when Covid first started, and facemasks were plentiful in China but almost impossible to get in the US, Chao-Yang Lu, one of the leaders of the new work and my sometime correspondent on the theory of BosonSampling, decided to mail me a box of 200 masks (I didn’t ask for it). I don’t think that influenced my later review, but it was appreciated nonetheless.

Huge congratulations to the whole team for their accomplishment!