Archive for the ‘Quantum’ Category

Doubts about teapot supremacy: my reply to Richard Borcherds

Tuesday, April 20th, 2021

Richard Borcherds is a British mathematician at Berkeley, who won the 1998 Fields Medal for the proof of the monstrous moonshine conjecture among many other contributions. A couple months ago, Borcherds posted on YouTube a self-described “rant” about quantum computing, which was recently making the rounds on Facebook and which I found highly entertaining.

Borcherds points out that the term “quantum supremacy” means only that quantum computers can outperform existing classical computers on some benchmark, which can be chosen to show maximum advantage for the quantum computer. He allows that BosonSampling could have some value, for example in calibrating quantum computers or in comparing one quantum computer to another, but he decries the popular conflation of quantum supremacy with the actual construction of a scalable quantum computer able (for example) to run Shor’s algorithm to break RSA.

Borcherds also proposes a “teapot test,” according to which any claim about quantum computers can be dismissed if an analogous claim would hold for a teapot (which he brandishes for the camera). For example, there are many claims to solve practical optimization and machine learning problems by “quantum/classical hybrid algorithms,” wherein a classical computer does most of the work but a quantum computer is somehow involved. Borcherds points out that, at least as things stand in early 2021, in most or all such cases, the classical computer could’ve probably done as well entirely on its own. So then if you put a teapot on top of your classical computer while it ran, you could equally say you used a “classical/teapot hybrid approach.”

Needless to say, Borcherds is correct about all of this. I’ve made similar points on this blog for 15 years, although less Britishly. I’m delighted to have such serious new firepower on the scoffing-at-QC-hype team.

I do, however, have one substantive disagreement. At one point, Borcherds argues that sampling-based quantum supremacy itself fails his teapot test. For consider the computational problem of predicting how many pieces a teapot will break into if it’s dropped on the ground. Clearly, he says, the teapot itself will outperform any simulation running on any existing classical computer at that task, and will therefore achieve “teapot supremacy.” But who cares??

I’m glad that Borcherds has set out, rather crisply, an objection that’s been put to me many times over the past decade. The response is simple: I don’t believe the teapot really does achieve teapot supremacy on the stated task! At the least, I’d need to be shown why. You can’t just assert it without serious argument.

If we want to mirror the existing quantum supremacy experiments, then the teapot computational problem, properly formulated, should be: given as input a description of a teapot’s construction, the height from which it’s dropped, etc., output a sample from the probability distribution over the number of shards that the teapot will break into when it hits the floor.

If so, though, then clearly a classical computer can easily sample from the same distribution! Why? Because presumably we agree that there’s a negligible probability of more than (say) 1000 shards. So the distribution is characterized by a list of at most 1000 probabilities, which can be estimated empirically (at the cost of a small warehouse of smashed teapots) and thereafter used to generate samples. In the plausible event that the distribution is (say) a Gaussian, it’s even easier: just estimate the mean and variance.
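To make that concrete, here’s a minimal Python sketch of the classical “teapot simulator,” assuming you’ve already recorded shard counts from your warehouse of smashed teapots (the Poisson-distributed stand-in data below is purely hypothetical, since I never obtained usable smash statistics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for real data: shard counts recorded from a warehouse
# of smashed teapots. (The Poisson here is just a placeholder distribution.)
observed_counts = rng.poisson(lam=12, size=500)

# Empirical estimate of the distribution over shard counts, assuming a
# negligible probability of more than 1000 shards.
max_shards = 1000
probs = np.bincount(observed_counts, minlength=max_shards + 1)[: max_shards + 1]
probs = probs / probs.sum()

def classical_teapot(num_samples):
    """Sample shard counts from the empirically estimated distribution."""
    return rng.choice(max_shards + 1, size=num_samples, p=probs)

# Gaussian shortcut: if the histogram looks normal, the mean and variance
# are all you need.
mu, sigma = observed_counts.mean(), observed_counts.std(ddof=1)
gaussian_samples = np.rint(rng.normal(mu, sigma, size=10)).astype(int)

print(classical_teapot(10))
print(gaussian_samples)
```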

A couple days ago, I was curious what the distribution looked like, so I decided to order some teapots from Amazon and check. Unfortunately, real porcelain teapots are expensive, and it seemed vaguely horrific to order dozens (as would be needed to get reasonable data) for the sole purpose of smashing them on my driveway. So I hit on what seemed like a perfect solution: I ordered toy teapots, which were much smaller and cheaper. Alas, when my toy “porcelain” teapots arrived yesterday, they turned out (unsurprisingly in retrospect for a children’s toy) to be some sort of plastic or composite material, meaning that they didn’t break unless one propelled them downward forcefully. So, while I can report that they tended to break into one or two large pieces along with two or three smaller shards, I found it impossible to get better data. (There’s a reason why I became a theoretical computer scientist…)

The good news is that my 4-year-old son had an absolute blast smashing toy teapots with me on our driveway, while my 8-year-old daughter was thrilled to take the remaining, unbroken teapots for her dollhouse. I apologize if this fails to defy gender stereotypes.

Anyway, it might be retorted that it’s not good enough to sample from a probability distribution: what’s wanted, rather, is to calculate how many pieces this specific teapot will break into, given all the microscopic details of it and its environment. Aha, this brings us to a crucial conceptual point: in order for something to count as an “input” to a computer, you need to be able to set it freely. Certainly, at the least, you need to be able to measure and record the input in its entirety, so that someone trying to reproduce your computation on a standard silicon computer would know exactly which computation to do. You don’t get to claim computational supremacy based on a problem with secret inputs: that’s like failing someone on a math test without having fully told them the problems.

Ability to set and know the inputs is the key property that’s satisfied by Google’s quantum supremacy experiment, and to a lesser extent by the USTC BosonSampling experiment, but that’s not satisfied at all by the “smash a teapot on the floor” experiment. Or perhaps it’s better to say: influences on a computation that vary uncontrollably and chaotically, like gusts of air hitting the teapot as it falls to the floor, shouldn’t be called “inputs” at all; they’re simply noise sources. And what one does with noise sources is to try to estimate their distribution and average over them—but in that case, as I said, there’s no teapot supremacy.

A Facebook friend said to me: that’s well and good, but surely we could change Borcherds’s teapot experiment to address this worry? For example: add a computer-controlled lathe (or even a 3D printer), with which you can build a teapot in an arbitrary shape of your choice. Then consider the problem of sampling from the probability distribution over how many pieces that teapot will smash into, when it’s dropped from some standard height onto some standard surface. I replied that this is indeed more interesting—in fact, it already seems more like what engineers do in practice (still, sometimes!) when building wind tunnels, than like a silly reductio ad absurdum of quantum supremacy experiments. On the other hand, if you believe the Extended Church-Turing Thesis, then as long as your analog computer is governed by classical physics, it’s presumably inherently limited to an Avogadro’s number type speedup over a standard digital computer, whereas with a quantum computer, you’re limited only by the exponential dimensionality of Hilbert space, which seems more interesting.

Or maybe I’m wrong—in which case, I look forward to the first practical demonstration of teapot supremacy! Just like with quantum supremacy, though, it’s not enough to assert it; you need to … put the tea where your mouth is.

Update: On the suggestion of Ernest Davis, who I can now reveal as the Facebook friend mentioned above, I just ordered some terra cotta flower pots, which look cheap, easily smashable, and environmentally friendly, and which will hopefully be acceptable substitutes for porcelain teapots in a new experiment. (Not that my main arguments in this post hinge on the results of such an experiment! That’s the power of theory.)

Another Update: Some of you might enjoy John Horgan’s Scientific American column on reality vs. hype in quantum computing, based on conversations with me and with Terry Rudolph of PsiQuantum.

The ACM Prize thing

Wednesday, April 14th, 2021

Last week I got an email from Dina Katabi, my former MIT colleague, asking me to call her urgently. Am I in trouble? For what, though?? I haven’t even worked at MIT for five years!

Luckily, Dina only wanted to tell me that I’d been selected to receive the 2020 ACM Prize in Computing, a mid-career award founded in 2007 that comes with $250,000 from Infosys. Not the Turing Award but I’d happily take it! And I could even look back on 2020 fondly for something.

I was utterly humbled to see the list of past ACM Prize recipients, which includes amazing computer scientists I’ve been privileged to know and learn from (like Jon Kleinberg, Sanjeev Arora, and Dan Boneh) and others who I’ve admired from afar (like Daphne Koller, Jeff Dean and Sanjay Ghemawat of Google MapReduce, and David Silver of AlphaGo and AlphaZero).

I was even more humbled, later, to read my prize citation, which focuses on four things:

  1. The theoretical foundations of the sampling-based quantum supremacy experiments now being carried out (and in particular, my and Alex Arkhipov’s 2011 paper on BosonSampling);
  2. My and Avi Wigderson’s 2008 paper on the algebrization barrier in complexity theory;
  3. Work on the limitations of quantum computers (in particular, the 2002 quantum lower bound for the collision problem); and
  4. Public outreach about quantum computing, including through my book Quantum Computing Since Democritus, popular talks and articles, and this blog.

I don’t know if I’m worthy of such a prize—but I know that if I am, then it’s mainly for work I did between roughly 2001 and 2012. This honor inspires me to want to be more like I was back then, when I was driven, non-jaded, and obsessed with figuring out the contours of BQP and efficient computation in the physical universe. It makes me want to justify the ACM’s faith in me.

I’m grateful to the committee and nominators, and more broadly, to the whole quantum computing and theoretical computer science communities—which I “joined” in some sense around age 16, and which were the first communities where I ever felt like I belonged. I’m grateful to the mentors who made me what I am, especially Chris Lynch, Bart Selman, Lov Grover, Umesh Vazirani, Avi Wigderson, and (if he’ll allow me to include him) John Preskill. I’m grateful to the slightly older quantum computer scientists who I looked up to and tried to emulate, like Dorit Aharonov, Andris Ambainis, Ronald de Wolf, and John Watrous. I’m grateful to my wonderful colleagues at UT Austin, in the CS department and beyond. I’m grateful to my students and postdocs, the pride of my professional life. I’m grateful, of course, to my wife, parents, and kids.

By coincidence, my last post was also about prizes to theoretical computer scientists—in that case, two prizes that attracted controversy because of the recipient’s (or would-be recipient’s) political actions or views. It would understate matters to point out that not everyone has always agreed with everything I’ve said on this blog. I’m ridiculously lucky, and I know it, that even living through this polarized and tumultuous era, I never felt forced to choose between academic success and the freedom to speak my conscience in public under my real name. If there’s been one constant in my public stands, I’d like to think that—inspired by memories of my own years as an unknown, awkward, self-conscious teenager—it’s been my determination to nurture and protect talented young scientists, whatever they look like and wherever they come from. And I’ve tried to live up to that ideal in real life, and I welcome anyone’s scrutiny as to how well I’ve done.

What should I do with the prize money? I confess that my first instinct was to donate it, in its entirety, to some suitable charity—specifically, something that would make all the strangers who’ve attacked me on Twitter, Reddit, and so forth over the years realize that I’m fundamentally a good person. But I was talked out of this plan by my family, who pointed out that
(1) in all likelihood, nothing will make online strangers stop hating me,
(2) in any case this seems like a poor basis for making decisions, and
(3) if I really want to give others a say in what to do with the winnings, then why not everyone who’s stood by me and supported me?

So, beloved commenters! Please mention your favorite charitable causes below, especially weird ones that I wouldn’t have heard of otherwise. If I support their values, I’ll make a small donation from my prize winnings. Or a larger donation, especially if you donate yourself and challenge me to match. Whatever’s left after I get tired of donating will probably go to my kids’ college fund.

Update: And by an amusing coincidence, today is apparently “World Quantum Day”! I hope your Quantum Day is as pleasant as mine (and stable and coherent).

QC ethics and hype: the call is coming from inside the house

Saturday, March 20th, 2021

For years, I’d sometimes hear discussions about the ethics of quantum computing research. Quantum ethics!

When the debates weren’t purely semantic, over the propriety of terms like “quantum supremacy” or “ancilla qubit,” they were always about chin-strokers like “but what if cracking RSA encryption gives governments more power to surveil their citizens? or what if only a few big countries or companies get quantum computers, thereby widening the divide between haves and have-nots?” Which, OK, conceivably these will someday be issues. But, besides barely depending on any specific facts about quantum computing, these debates always struck me as oddly safe, because the moral dilemmas were so hypothetical and far removed from us in time.

I confess I may have even occasionally poked fun when asked to expound on quantum ethics. I may have commented that quantum computers probably won’t kill anyone unless a dilution refrigerator tips over onto their head. I may have asked forgiveness for feeding custom-designed oracles to BQP and QMA, without first consulting an ethics committee about the long-term effects on those complexity classes.

Now fate has punished me for my flippancy. These days, I really do feel like quantum computing research has become an ethical minefield—but not for any of the reasons mentioned previously. What’s new is that millions of dollars are now potentially available to quantum computing researchers, along with equity, stock options, and whatever else causes “ka-ching” sound effects and bulging eyes with dollar signs. And in many cases, to have a shot at such riches, all an expert needs to do is profess optimism that quantum computing will have revolutionary, world-changing applications and have them soon. Or at least, not object too strongly when others say that.

Some of today’s rhetoric will of course remind people of the D-Wave saga, which first brought this blog to prominence when it began in earnest in 2007. Quantum computers, we hear now as then, will soon leave the Earth’s fastest supercomputers in the dust. They’re going to harness superposition to try all the exponentially many possible solutions at once. They’ll crack the Traveling Salesman Problem, and will transform machine learning and AI beyond recognition. Meanwhile, simulations of quantum systems will be key to solving global warming and cancer.

Despite the parallels, though, this new gold rush doesn’t feel to me like the D-Wave one, which seems in retrospect like just a little dry run. If I had to articulate what’s new in one sentence, it’s that this time “the call is coming from inside the house.” Many of the companies making wildly overhyped claims are recognized leaders of the field. They have brilliant quantum computing theorists and experimentalists on their staff with impeccable research records. Some of those researchers are among my best friends. And even when I wince at the claims of near-term applications, in many cases (especially with quantum simulation) the claims aren’t obviously false—we won’t know for certain until we try it and see! It’s genuinely gotten harder to draw the line between defensible optimism and exaggerations verging on fraud.

Indeed, this time around virtually everyone in QC is “complicit” to a greater or lesser degree. I, too, have accepted compensation to consult on quantum computing topics, to give talks at hedge funds, and in a few cases to serve as a scientific adviser to quantum computing startups. I tell myself that, by 2021 standards, this stuff is all trivial chump change—a few thousands of dollars here or there, to expound on the same themes that I already discuss free of charge on this blog. I actually get paid to dispel hype, rather than propagate it! I tell myself that I’ve turned my back on the orders of magnitude more money available to those willing to hitch their scientific reputations to the aspirations of this or that specific QC company. (Yes, this blog, and my desire to preserve its intellectual independence and credibility, might well be costing me millions!)

But, OK, some would argue that accepting any money from QC companies or QC investors just puts you at the top of a slope with unabashed snake-oil salesmen at the bottom. With the commercialization of our field that started around 2015, there’s no bright line anymore marking the boundary between pure scientific curiosity and the pursuit of filthy lucre; it’s all just points along a continuum. I’m not sure that these people are wrong.

As some of you might’ve seen already, IonQ, the trapped-ion QC startup that originated from the University of Maryland, is poised to have the first-ever quantum computing IPO—a so-called “SPAC IPO,” which (though I’m a financial ignoramus) apparently involves merging with a shell company and thereby bypassing the SEC’s normal IPO rules. Supposedly they’re seeking $650 million in new funding and a $2 billion market cap. If you want to see what IonQ is saying about QC to prospective investors, click here. Lacking any choice in the matter, I’ll probably say more about these developments in a future post.

Meanwhile, PsiQuantum, the Palo-Alto-based optical QC startup, has said that it’s soon going to leave “stealth mode.” And Amazon, Microsoft, Google, IBM, Honeywell, and other big players continue making large investments in QC—treating it, at least rhetorically, not at all like blue-sky basic research, but like a central part of their future business plans.

All of these companies have produced or funded excellent QC research. And of course, they’re all heterogeneous, composed of individuals who might vehemently disagree with each other about the near- or long-term prospects of QC. And yet all of them have, at various times, inspired reflections in me like the ones in this post.

I regret that this post has no clear conclusion. I’m still hashing things out, soliciting thoughts from my readers and friends. Speaking of which: this coming Monday, March 22, from 8 to 10pm US Eastern time, I’ve decided to hold a discussion around these issues on Clubhouse—my “grand debut” on that app, and an opportunity to see whether I like it or not! My friend Adam Brown will moderate the discussion; other likely participants will be John Horgan, George Musser, Michael Nielsen, and Matjaž Leonardis. If you’re on Clubhouse, I hope to see you there!

Update (March 22): Read this comment by “FB” if you’d like to understand how we got to this point.

Long-delayed UT Austin Quantum Complexity Theory Student Project Showcase!

Thursday, March 11th, 2021

Back at MIT, whenever I taught my graduate course on Quantum Complexity Theory (see here for lecture notes), I had a tradition of showcasing the student projects on this blog: see here (Fall 2010), here (Fall 2012), here (Fall 2014). I was incredibly proud that, each time I taught, at least some of the projects led to publishable original research—sometimes highly significant research, like Paul Christiano’s work on quantum money (which led to my later paper with him), Shelby Kimmel’s work on quantum query complexity, Jenny Barry’s work on quantum partially observable Markov decision processes (“QOMDPs”), or Matt Coudron and Henry Yuen’s work on randomness expansion (which led to their later breakthrough in the subject).

Alas, after I moved to UT Austin, for some reason I discontinued the tradition of these blog-showcases—and inexcusably, I did this even though the wonderful new research results continued! Now that I’m teaching Quantum Complexity Theory at UT for the third time (via Zoom, of course), I decided that it was finally time to remedy this. To keep things manageable, this time I’m going to limit myself to research projects that began their lives in my course and that are already public on the arXiv (or in one case, that will soon be).

So please enjoy the following smorgasbord, from the Fall 2016 and Spring 2019 iterations of my course! And if you have any questions about any of the projects—well, I’ll try to get the students to answer in the comments section! Thanks so much and congratulations to the students for their work.

From the Fall 2016 iteration of the course

William Hoza (project turned into a joint paper with Cole Graham), Universal Bell Correlations Do Not Exist.

We prove that there is no finite-alphabet nonlocal box that generates exactly those correlations that can be generated using a maximally entangled pair of qubits. More generally, we prove that if some finite-alphabet nonlocal box is strong enough to simulate arbitrary local projective measurements of a maximally entangled pair of qubits, then that nonlocal box cannot itself be simulated using any finite amount of entanglement. We also give a quantitative version of this theorem for approximate simulations, along with a corresponding upper bound.

Patrick Rall, Signed quantum weight enumerators characterize qubit magic state distillation.

Many proposals for fault-tolerant quantum computation require injection of ‘magic states’ to achieve a universal set of operations. Some qubit states are above a threshold fidelity, allowing them to be converted into magic states via ‘magic state distillation’, a process based on stabilizer codes from quantum error correction.
We define quantum weight enumerators that take into account the sign of the stabilizer operators. These enumerators completely describe the magic state distillation behavior when distilling T-type magic states. While it is straightforward to calculate them directly by counting exponentially many operator weights, it is also an NP-hard problem to compute them in general. This suggests that finding a family of distillation schemes with desired threshold properties is at least as hard as finding the weight distributions of a family of classical codes.
Additionally, we develop search algorithms fast enough to analyze all useful 5 qubit codes and some 7 qubit codes, finding no codes that surpass the best known threshold.

From the Spring 2019 iteration of the course

Ying-Hao Chen, 2-Local Hamiltonian with Low Complexity is QCMA-complete.

We prove that the 2-Local Hamiltonian (2-LH) with Low Complexity problem is QCMA-complete, by combining the results from the QMA-completeness of 2-LH and the QCMA-completeness of 3-LH with Low Complexity. The idea is straightforward. It has been known that 2-LH is QMA-complete. By putting a low-complexity constraint on the input state, we make the problem QCMA. Finally, we use similar arguments as in [Kempe, Kitaev, Regev] to show that all QCMA problems can be reduced to our proposed problem.

Jeremy Cook, On the relationships between Z-, C-, and H-local unitaries.

Quantum walk algorithms can speed up search of physical regions of space in both the discrete-time [arXiv:quant-ph/0402107] and continuous-time setting [arXiv:quant-ph/0306054], where the physical region of space being searched is modeled as a connected graph. In such a model, Aaronson and Ambainis [arXiv:quant-ph/0303041] provide three different criteria for a unitary matrix to act locally with respect to a graph, called Z-local, C-local, and H-local unitaries, and left the open question of relating these three locality criteria. Using a correspondence between continuous- and discrete-time quantum walks by Childs [arXiv:0810.0312], we provide a way to approximate N×N H-local unitaries with error δ using $O(\max\{1/\sqrt{\delta},\sqrt{N}\})$ C-local unitaries.

Joshua A. Cook, Approximating Unitary Preparations of Orthogonal Black Box States.

In this paper, I take a step toward answering the following question: for m different small circuits that compute m orthogonal n-qubit states, is there a small circuit that will map m computational basis states to these m states without leaving any auxiliary bits changed on any input? While this may seem simple, the constraint that the auxiliary bits always be returned to 0 on any input (even ones besides the m we care about) led me to use sophisticated techniques. I give an approximation of such a unitary in the m = 2 case that has size polynomial in the approximation error and the number of qubits n.

Sabee Grewal (project turned into a joint paper with me), Efficient Learning of Non-Interacting Fermion Distributions.

We give an efficient classical algorithm that recovers the distribution of a non-interacting fermion state over the computational basis. For a system of n non-interacting fermions and m modes, we show that $O(m^2 n^4 \log(m/\delta)/\epsilon^4)$ samples and $O(m^4 n^4 \log(m/\delta)/\epsilon^4)$ time are sufficient to learn the original distribution to total variation distance $\epsilon$ with probability $1-\delta$. Our algorithm empirically estimates the one- and two-mode correlations and uses them to reconstruct a succinct description of the entire distribution efficiently.

Sam Gunn and Niels Kornerup, Review of a Quantum Algorithm for Betti Numbers.

We looked into the algorithm for calculating Betti numbers presented by Lloyd, Garnerone, and Zanardi (LGZ). We present a new algorithm in the same spirit as LGZ with the intent of clarifying quantum algorithms for computing Betti numbers. Our algorithm is simpler and slightly more efficient than that presented by LGZ. We present a thorough analysis of our algorithm, pointing out reasons that both our algorithm and that presented by LGZ do not run in polynomial time for most inputs. However, the algorithms do run in polynomial time for calculating an approximation of the Betti number to polynomial multiplicative error, when applied to some class of graphs for which the Betti number is exponentially large.

William Kretschmer, Lower Bounding the AND-OR Tree via Symmetrization.

We prove a simple, nearly tight lower bound on the approximate degree of the two-level AND-OR tree using symmetrization arguments. Specifically, we show that $\widetilde{\deg}(\mathrm{AND}_m \circ \mathrm{OR}_n) = \widetilde{\Omega}(\sqrt{mn})$. To our knowledge, this is the first proof of this fact that relies on symmetrization exclusively; most other proofs involve the more complicated formulation of approximate degree as a linear program [BT13, She13, BDBGK18]. Our proof also demonstrates the power of a symmetrization technique involving Laurent polynomials (polynomials with negative exponents) that was previously introduced by Aaronson, Kothari, Kretschmer, and Thaler [AKKT19].

Jiahui Liu and Ruizhe Zhang (project turned into a joint paper with me, Mark Zhandry, and Qipeng Liu),
New Approaches for Quantum Copy-Protection.

Quantum copy protection uses the unclonability of quantum states to construct quantum software that provably cannot be pirated. Copy protection would be immensely useful, but unfortunately little is known about how to achieve it in general. In this work, we make progress on this goal, by giving the following results:
– We show how to copy protect any program that cannot be learned from its input/output behavior, relative to a classical oracle. This improves on Aaronson [CCC’09], which achieves the same relative to a quantum oracle. By instantiating the oracle with post-quantum candidate obfuscation schemes, we obtain a heuristic construction of copy protection.
– We show, roughly, that any program which can be watermarked can be copy detected, a weaker version of copy protection that does not prevent copying, but guarantees that any copying can be detected. Our scheme relies on the security of the assumed watermarking, plus the assumed existence of public key quantum money. Our construction is general, applicable to many recent watermarking schemes.

John Kallaugher, Triangle Counting in the Quantum Streaming Model. Not yet available but coming soon to an arXiv near you!

We give a quantum algorithm for counting triangles in graph streams that uses less space than the best possible classical algorithm.

Sayonara Majorana?

Wednesday, March 10th, 2021

Many of you have surely already seen the news that the Kouwenhoven group in Delft—which in 2018 published a paper in Nature claiming to have detected Majorana particles, a type of nonabelian anyon—has retracted the paper and apologized for “insufficient scientific rigour.” This work was considered one of the linchpins of Microsoft’s experimental effort toward building topological quantum computers.

Like most quantum computing theorists, I guess, I’m thrilled if Majorana particles can be created using existing technology, I’m sad if they can’t be, but I don’t have any special investment in or knowledge of the topic, beyond what I read in the news or hear from colleagues. Certainly Majorana particles seem neither necessary nor sufficient for building a scalable quantum computer, although they’d be a step forward for the topological approach to QC.

The purpose of this post is to invite informed scientific discussion of the relevant issues—first and foremost so that I can learn something, and second so that my readers can! I’d be especially interested to understand:

  1. Weren’t there, like, several other claims to have produced Majoranas? What of those then?
  2. If, today, no one has convincingly demonstrated the existence of Majoranas, then do people think it more likely that they were produced but not detected, or that they weren’t even produced?
  3. How credible are the explanations as to what went wrong?
  4. Are there any broader implications for the prospects of topological QC, or Microsoft’s path to topological QC, or was this just an isolated mistake?

Another axe swung at the Sycamore

Sunday, March 7th, 2021

So there’s an interesting new paper on the arXiv by Feng Pan and Pan Zhang, entitled “Simulating the Sycamore supremacy circuits.” It’s about a new tensor contraction strategy for classically simulating Google’s 53-qubit quantum supremacy experiment from Fall 2019. Using their approach, and using just 60 GPUs running for a few days, the authors say they managed to generate a million correlated 53-bit strings—meaning, strings that all agree on a specific subset of 20 or so bits—that achieve a high linear cross-entropy score.

Alas, I haven’t had time this weekend to write a “proper” blog post about this, but several people have by now emailed to ask my opinion, so I thought I’d share the brief response I sent to a journalist.

This does look like a significant advance on simulating Sycamore-like random quantum circuits! Since it’s based on tensor networks, you don’t need the literally largest supercomputer on the planet filling up tens of petabytes of hard disk space with amplitudes, as in the brute-force strategy proposed by IBM. Pan and Zhang’s strategy seems most similar to the strategy previously proposed by Alibaba, with the key difference being that the new approach generates millions of correlated samples rather than just one.

I guess my main thoughts for now are:

  1. Once you knew about this particular attack, you could evade it and get back to where we were before by switching to a more sophisticated verification test—namely, one where you not only computed a Linear XEB score for the observed samples, but also made sure that the samples didn’t share too many bits in common. (Strangely, though, the paper never mentions this point.) A toy version of such a check is sketched right after this list.
  2. The other response, of course, would just be to redo random circuit sampling with a slightly bigger quantum computer, like the ~70-qubit devices that Google, IBM, and others are now building!
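
To illustrate the kind of check I have in mind, here’s a toy Python sketch: compute the average number of bit positions on which pairs of samples agree. Honest, independently drawn samples agree on roughly half the bits, while a spoofed batch that shares a fixed ~20-bit block agrees on noticeably more. (The uniform stand-in data below obviously isn’t real Sycamore output; it’s only there to show the statistic.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_samples = 53, 1000

def mean_pairwise_agreement(S):
    """Average number of bit positions on which a pair of samples agrees."""
    total, pairs = 0, 0
    for i in range(len(S) - 1):
        total += (S[i] == S[i + 1:]).sum()
        pairs += len(S) - i - 1
    return total / pairs

honest = rng.integers(0, 2, size=(n_samples, n_bits), dtype=np.int8)

spoofed = rng.integers(0, 2, size=(n_samples, n_bits), dtype=np.int8)
spoofed[:, :20] = rng.integers(0, 2, size=20, dtype=np.int8)  # shared 20-bit block

print(mean_pairwise_agreement(honest))   # ~26.5, i.e. about n_bits / 2
print(mean_pairwise_agreement(spoofed))  # ~36.5: the shared block sticks out
```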

Anyway, very happy for thoughts from anyone who knows more.

The Zen Anti-Interpretation of Quantum Mechanics

Thursday, March 4th, 2021

As I lay bedridden this week, knocked out by my second dose of the Moderna vaccine, I decided I should blog some more half-baked ideas, because what the hell? It feels therapeutic, I have tenure, and anyone who doesn’t like it can close their browser tab.

So: although I’ve written tens of thousands of words, on this blog and elsewhere, about interpretations of quantum mechanics, again and again I’ve dodged the question of which interpretation (if any) I really believe myself. Today, at last, I’ll emerge from the shadows and tell you precisely where I stand.

I hold that all interpretations of QM are just crutches that are better or worse at helping you along to the Zen realization that QM is what it is and doesn’t need an interpretation.  As Sidney Coleman famously argued, what needs reinterpretation is not QM itself, but all our pre-quantum philosophical baggage—the baggage that leads us to demand, for example, that a wavefunction |ψ⟩ either be “real” like a stubbed toe or else “unreal” like a dream. Crucially, because this philosophical baggage differs somewhat from person to person, the “best” interpretation—meaning, the one that leads most quickly to the desired Zen state—can also differ from person to person. Meanwhile, though, thousands of physicists (and chemists, mathematicians, quantum computer scientists, etc.) have approached the Zen state merely by spending decades working with QM, never worrying much about interpretations at all. This is probably the truest path; it’s just that most people lack the inclination, ability, or time.

Greg Kuperberg, one of the smartest people I know, once told me that the problem with the Many-Worlds Interpretation is not that it says anything wrong, but only that it’s “melodramatic” and “overwritten.” Greg is far along the Zen path, probably further than me.

You shouldn’t confuse the Zen Anti-Interpretation with “Shut Up And Calculate.” The latter phrase, mistakenly attributed to Feynman but really due to David Mermin, is something one might say at the beginning of the path, when one is as a baby. I’m talking here only about the endpoint of the path, which one can approach but never reach—the endpoint where you intuitively understand exactly what a Many-Worlder, Copenhagenist, or Bohmian would say about any given issue, and also how they’d respond to each other, and how they’d respond to the responses, etc. but after years of study and effort you’ve returned to the situation of the baby, who just sees the thing for what it is.

I don’t mean to say that the interpretations are all interchangeable, or equally good or bad. If you had to, you could call even me a “Many-Worlder,” but only in the following limited sense: that in fifteen years of teaching quantum information, my experience has consistently been that for most students, Everett’s crutch is the best one currently on the market. At any rate, it’s the one that’s the most like a straightforward picture of the equations, and the least like a wobbly tower of words that might collapse if you utter any wrong ones.  Unlike Bohr, Everett will never make you feel stupid for asking the questions an inquisitive child would ask; he’ll simply give you answers that are as clear, logical, and internally consistent as they are metaphysically extravagant. That’s a start.

The Copenhagen Interpretation retains a place of honor as the first crutch, for decades the only crutch, and the one closest to the spirit of positivism. Unfortunately, wielding the Copenhagen crutch requires mad philosophical skillz—which parts of the universe should you temporarily regard as “classical”? which questions should be answered, and which deflected?—to the point where, if you’re capable of all that verbal footwork, then why do you even need a crutch in the first place? In the hands of amateurs—meaning, alas, nearly everyone—Copenhagen often leads away from rather than toward the Zen state, as one sees with the generations of New-Age bastardizations about “observations creating reality.”

As for deBroglie-Bohm—well, that’s a weird, interesting, baroque crutch, one whose actual details (the preferred basis and the guiding equation) are historically contingent and tied to specific physical systems. It’s probably the right crutch for someone—it gets eternal credit for having led Bell to discover the Bell inequality—but its quirks definitely need to be discarded along the way.

Note that, among those who approach the Zen state, many might still call themselves Many-Worlders or Copenhagenists or Bohmians or whatever—just as those far along in spiritual enlightenment might still call themselves Buddhists or Catholics or Muslims or Jews (or atheists or agnostics)—even though, by that point, they might have more in common with each other than they do with their supposed coreligionists or co-irreligionists.

Alright, but isn’t all this Zen stuff just a way to dodge the actual, substantive questions about QM, by cheaply claiming to have transcended them? If that’s your charge, then please help yourself to the following FAQ about the details of the Zen Anti-Interpretation.

  1. What is a quantum state? It’s a unit vector of complex numbers (or if we’re talking about mixed states, then a trace-1, Hermitian, positive semidefinite matrix), which encodes everything there is to know about a physical system.
  2. OK, but are the quantum states “ontic” (really out in the world), or “epistemic” (only in our heads)? Dude. Do “basketball games” really exist, or is that just a phrase we use to summarize our knowledge about certain large agglomerations of interacting quarks and leptons? Do even the “quarks” and “leptons” exist, or are those just words for excitations of the more fundamental fields? Does “jealousy” exist? Pretty much all our concepts are complicated grab bags of “ontic” and “epistemic,” so it shouldn’t surprise us if quantum states are too. Bad dichotomy.
  3. Why are there probabilities in QM? Because QM is a (the?) generalization of probability theory to involve complex numbers, whose squared absolute values are probabilities. It includes probability as a special case.
  4. But why do the probabilities obey the Born rule? Because, once the unitary part of QM has picked out the 2-norm as being special, for the probabilities also to be governed by the 2-norm is pretty much the only possibility that makes mathematical sense; there are many nice theorems formalizing that intuition under reasonable assumptions. (For a toy numerical illustration of this item and items 1 and 3, see the sketch right after this FAQ.)
  5. What is an “observer”? It’s exactly what modern decoherence theory says it is: a particular kind of quantum system that interacts with other quantum systems, becomes entangled with them, and thereby records information about them—reversibly in principle but irreversibly in practice.
  6. Can observers be manipulated in coherent superposition, as in the Wigner’s Friend scenario? If so, they’d be radically unlike any physical system we’ve ever had direct experience with. So, are you asking whether such “observers” would be conscious, or if so what they’d be conscious of? Who the hell knows?
  7. Do “other” branches of the wavefunction—ones, for example, where my life took a different course—exist in the same sense this one does? If you start with a quantum state for the early universe and then time-evolve it forward, then yes, you’ll get not only “our” branch but also a proliferation of other branches, in the overwhelming majority of which Donald Trump was never president and civilization didn’t grind to a halt because of a bat near Wuhan.  But how could we possibly know whether anything “breathes fire” into the other branches and makes them real, when we have no idea what breathes fire into this branch and makes it real? This is not a dodge—it’s just that a simple “yes” or “no” would fail to do justice to the enormity of such a question, which is above the pay grade of physics as it currently exists. 
  8. Is this it? Have you brought me to the end of the path of understanding QM? No, I’ve just pointed the way toward the beginning of the path. The most fundamental tenet of the Zen Anti-Interpretation is that there’s no shortcut to actually working through the Bell inequality, quantum teleportation, Shor’s algorithm, the Kochen-Specker and PBR theorems, possibly even a … photon or a hydrogen atom, so you can see quantum probability in action and be enlightened. I’m further along the path than I was twenty years ago, but not as far along as some of my colleagues. Even the greatest quantum Zen masters will be able to get further when new quantum phenomena and protocols are discovered in the future. All the same, though—and this is another major teaching of the Zen Anti-Interpretation—there’s more to life than achieving greater and greater clarity about the foundations of QM. And on that note…
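
Since items 1, 3, and 4 of the FAQ are really just statements about complex vectors and the 2-norm, here’s a minimal numerical sketch of them, interpretation-free (which is sort of the point). Everything below is standard textbook material; the specific state and random “unitary” are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Item 1: a pure state of one qubit is just a unit vector of complex numbers.
psi = np.array([1 + 1j, 2 - 1j], dtype=complex)
psi /= np.linalg.norm(psi)

# Item 1, mixed-state version: a trace-1, Hermitian, positive semidefinite matrix.
rho = 0.5 * np.outer(psi, psi.conj()) + 0.5 * np.diag([1.0, 0.0])
assert np.isclose(np.trace(rho).real, 1.0)
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)

# Item 3: probabilities are the squared absolute values of the amplitudes.
born_probs = np.abs(psi) ** 2
assert np.isclose(born_probs.sum(), 1.0)

# Item 4: unitary evolution preserves the 2-norm, so the Born-rule
# probabilities keep summing to 1 no matter which unitary you apply.
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)                      # a random 2x2 unitary
assert np.isclose((np.abs(U @ psi) ** 2).sum(), 1.0)

print(born_probs, np.abs(U @ psi) ** 2)
```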

To those who asked me about Claus Peter Schnorr’s claim to have discovered a fast classical factoring algorithm, thereby “destroying” (in his words) the RSA cryptosystem, see (e.g.) this Twitter thread by Keegan Ryan, which explains what certainly looks like a fatal error in Schnorr’s paper.

Research (by others) proceeds apace

Wednesday, January 27th, 2021

At age 39, I already feel more often than not like a washed-up has-been in complexity theory and quantum computing research. It’s not intelligence that I feel like I’ve lost, so much as two other necessary ingredients: burning motivation and time. But all is not lost: I still have students and postdocs to guide and inspire! I still have the people who email me every day—journalists, high-school kids, colleagues—asking this and that! Finally, I still have this blog, with which to talk about all the exciting research that others are doing!

Speaking of blogging about research: I know I ought to do more of it, so let me start right now.

  • Last night, Renou et al. posted a striking paper on the arXiv entitled Quantum physics needs complex numbers. One’s immediate reaction to the title might be “well duh … who ever thought it didn’t?” (See this post of mine for a survey of explanations for why quantum mechanics “should have” involved complex numbers.) Renou et al., however, are interested in ruling out a subtler possibility: namely, that our universe is secretly based on a version of quantum mechanics with real amplitudes only, and that it uses extra Hilbert space dimensions that we don’t see in order to simulate complex quantum mechanics. Strictly speaking, such a possibility can never be ruled out, any more than one can rule out the possibility that the universe is a classical computer that simulates quantum mechanics. In the latter case, though, the whole point of Bell’s Theorem is to show that if the universe is secretly classical, then it also needs to be radically nonlocal (relying on faster-than-light communication to coordinate measurement outcomes). Renou et al. claim to show something analogous about real quantum mechanics: there’s an experiment—as it happens, one involving three players and two entangled pairs—for which conventional QM predicts an outcome that can’t be explained using any variant of QM that’s both local and secretly based on real amplitudes. Their experiment seems eminently doable, and I imagine it will be done in short order.
  • A bunch of people from PsiQuantum posted a paper on the arXiv introducing “fusion-based quantum computation” (FBQC), a variant of measurement-based quantum computation (MBQC) and apparently a new approach to fault-tolerance, which the authors say can handle a ~10% rate of lost photons. PsiQuantum is the large, Palo-Alto-based startup trying to build scalable quantum computers based on photonics. They’ve been notoriously secretive, to the point of not having a website. I’m delighted that they’re sharing details of the sort of thing they hope to build; I hope and expect that the FBQC proposal will be evaluated by people more qualified than me.
  • Since this is already on social media: apparently, Marc Lackenby from Oxford will be giving a Zoom talk at UC Davis next week, about a quasipolynomial-time algorithm to decide whether a given knot is the unknot. A preprint doesn’t seem to be available yet, but this is a big deal if correct, on par with Babai’s quasipolynomial-time algorithm for graph isomorphism from four years ago (see this post). I can’t wait to see details! (Not that I’ll understand them well.)

Sufficiently amusing that I had no choice

Thursday, January 21st, 2021

Chinese BosonSampling experiment: the gloves are off

Wednesday, December 16th, 2020

Two weeks ago, I blogged about the striking claim, by the group headed by Chaoyang Lu and Jianwei Pan at USTC in China, to have achieved quantum supremacy via BosonSampling with 50-70 detected photons. I also did a four-part interview on the subject with Jonathan Tennenbaum at Asia Times, and other interviews elsewhere. None of that stopped some people, who I guess didn’t google, from writing to tell me how disappointed they were by my silence!

The reality, though, is that a lot has happened since the original announcement, so it’s way past time for an update.

I. The Quest to Spoof

Most importantly, other groups almost immediately went to work trying to refute the quantum supremacy claim, by finding some efficient classical algorithm to spoof the reported results. It’s important to understand that this is exactly how the process is supposed to work: as I’ve often stressed, a quantum supremacy claim is credible only if it’s open to the community to refute and if no one can. It’s also important to understand that, for reasons we’ll go into, there’s a decent chance that people will succeed in simulating the new experiment classically, although they haven’t yet. All parties to the discussion agree that the new experiment is, far and away, the closest any BosonSampling experiment has ever gotten to the quantum supremacy regime; the hard part is to figure out if it’s already there.

Part of me feels guilty that, as one of the reviewers on the Science paper—albeit one stressed and harried by kids and covid—it’s now clear that I didn’t exercise the amount of diligence that I could have, in searching for ways to kill the new supremacy claim. But another part of me feels that, with quantum supremacy claims, much like with proposals for new cryptographic codes, vetting can’t be the responsibility of one or two reviewers. Instead, provided the claim is serious—as this one obviously is—the only thing to do is to get the paper out, so that the entire community can then work to knock it down. Communication between authors and skeptics is also a hell of a lot faster when it doesn’t need to go through a journal’s editorial system.

Not surprisingly, one skeptic of the new quantum supremacy claim is Gil Kalai, who (despite Google’s result last year, which Gil still believes must be in error) rejects the entire possibility of quantum supremacy on quasi-metaphysical grounds. But other skeptics are current and former members of the Google team, including Sergio Boixo and John Martinis! And—pause to enjoy the irony—Gil has effectively teamed up with the Google folks on questioning the new claim. Another central figure in the vetting effort—one from whom I’ve learned much of what I know about the relevant issues over the last week—is Dutch quantum optics professor and frequent Shtetl-Optimized commenter Jelmer Renema.

Without further ado, why might the new experiment, impressive though it was, be efficiently simulable classically? A central reason for concern is photon loss: as Chaoyang Lu has now explicitly confirmed (it was implicit in the paper), up to ~70% of the photons get lost on their way through the beamsplitter network, leaving only ~30% to be detected. At least with “Fock state” BosonSampling—i.e., the original kind, the kind with single-photon inputs that Alex Arkhipov and I proposed in 2011—it seems likely to me that such a loss rate would be fatal for quantum supremacy; see for example this 2019 paper by Renema, Shchesnovich, and Garcia-Patron.

Incidentally, if anything’s become clear over the last two weeks, it’s that I, the co-inventor of BosonSampling, am no longer any sort of expert on the subject’s literature!

Anyway, one source of uncertainty regarding the photon loss issue is that, as I said in my last post, the USTC experiment implemented a 2016 variant of BosonSampling called Gaussian BosonSampling (GBS)—and Jelmer tells me that the computational complexity of GBS in the presence of losses hasn’t yet been analyzed in the relevant regime, though there’s been work aiming in that direction. A second source of uncertainty is simply that the classical simulations work in a certain limit—namely, fixing the rate of noise and then letting the numbers of photons and modes go to infinity—but any real experiment has a fixed number of photons and modes (in USTC’s case, they’re ~50 and ~100 respectively). It wouldn’t do to reject USTC’s claim via a theoretical asymptotic argument that would equally well apply to any non-error-corrected quantum supremacy demonstration!

OK, but if an efficient classical simulation of lossy GBS experiments exists, then what is it? How does it work? It turns out that we have a plausible candidate for the answer to that, originating with a 2014 paper by Gil Kalai and Guy Kindler. Given a beamsplitter network, Kalai and Kindler considered an infinite hierarchy of better and better approximations to the BosonSampling distribution for that network. Roughly speaking, at the first level (k=1), one pretends that the photons are just classical distinguishable particles. At the second level (k=2), one correctly models quantum interference involving pairs of photons, but none of the higher-order interference. At the third level (k=3), one correctly models three-photon interference, and so on until k=n (where n is the total number of photons), when one has reproduced the original BosonSampling distribution. At least when k is small, the time needed to spoof outputs at the kth level of the hierarchy should grow like $n^k$. As theoretical computer scientists, Kalai and Kindler didn’t care whether their hierarchy produced any physically realistic kind of noise, but later work, by Shchesnovich, Renema, and others, showed that (as it happens) it does.
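
To give a feel for the bottom rung of this hierarchy, here’s a toy Python sketch of a k=1 “spoofer” for Fock-state BosonSampling: it routes each photon through the network independently, as a classical distinguishable particle, with no interference whatsoever. (The random network and photon numbers below are made-up toy values; real spoofing attempts are of course far more sophisticated.)

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 16, 4                                  # modes and photons (toy sizes)

# A Haar-ish random m x m unitary, standing in for the beamsplitter network.
A = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
U, _ = np.linalg.qr(A)

input_modes = range(n)                        # one photon in each of the first n modes

def k1_spoof_sample():
    """k=1 level: treat the photons as classical distinguishable particles.
    A photon entering mode i exits in mode j with probability |U[i, j]|^2,
    independently of all the other photons, i.e. with no interference."""
    counts = np.zeros(m, dtype=int)
    for i in input_modes:
        counts[rng.choice(m, p=np.abs(U[i]) ** 2)] += 1
    return counts

print(k1_spoof_sample())
```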

In its original paper, the USTC team ruled out the possibility that the first, k=1 level of this hierarchy could explain its experimental results. More recently, in response to inquiries by Sergio, Gil, Jelmer, and others, Chaoyang tells me they’ve ruled out the possibility that the k=2 level can explain their results either. We’re now eagerly awaiting the answer for larger values of k.

Let me add that I owe Gil Kalai the following public mea culpa. While his objections to QC have often struck me as unmotivated and weird, in the case at hand, Gil’s 2014 work with Kindler is clearly helping drive the scientific discussion forward. In other words, at least with BosonSampling, it turns out that Gil put his finger precisely on a key issue. He did exactly what every QC skeptic should do, and what I’ve always implored the skeptics to do.

II. BosonSampling vs. Random Circuit Sampling: A Tale of HOG and CHOG and LXEB

There’s a broader question: why should skeptics of a BosonSampling experiment even have to think about messy details like the rate of photon losses? Why shouldn’t that be solely the experimenters’ job?

To understand what I mean, consider the situation with Random Circuit Sampling, the task Google demonstrated last year with 53 qubits. There, the Google team simply collected the output samples and fed them into a benchmark that they called “Linear Cross-Entropy” (LXEB), closely related to what Lijie Chen and I called “Heavy Output Generation” (HOG) in a 2017 paper. With suitable normalization, an ideal quantum computer would achieve an LXEB score of 2, while classical random guessing would achieve an LXEB score of 1. Crucially, according to a 2019 result by me and Sam Gunn, under a plausible (albeit strong) complexity assumption, no subexponential-time classical spoofing algorithm should be able to achieve an LXEB score that’s even slightly higher than 1. In its experiment, Google reported an LXEB score of about 1.002, with a confidence interval much smaller than 0.002. Hence: quantum supremacy (subject to our computational assumption), with no further need to know anything about the sources of noise in Google’s chip! (More explicitly, Boixo, Smelyansky, and Neven did a calculation in 2017 to show that the Kalai-Kindler type of spoofing strategy definitely isn’t going to work against RCS and Linear XEB, with no computational assumption needed.)
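
For concreteness, here’s a toy Python sketch of that LXEB calibration, using the squared amplitudes of a random complex vector (approximately the Porter-Thomas distribution) as a stand-in for an ideal random circuit’s output distribution. Sampling from the ideal distribution gives a normalized score near 2, while uniform random guessing gives a score near 1; nothing below depends on any real hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                        # number of qubits (tiny, just for illustration)
N = 2 ** n

# Stand-in for an ideal random circuit's output distribution: squared
# amplitudes of a random complex vector (approximately Porter-Thomas).
amps = rng.normal(size=N) + 1j * rng.normal(size=N)
p = np.abs(amps) ** 2
p /= p.sum()

def linear_xeb(samples):
    """Normalized linear cross-entropy: N times the average ideal probability
    of the observed samples."""
    return N * p[samples].mean()

ideal_samples = rng.choice(N, size=100_000, p=p)   # noiseless "quantum" samples
random_guesses = rng.choice(N, size=100_000)       # classical uniform guessing

print(round(linear_xeb(ideal_samples), 3))   # close to 2
print(round(linear_xeb(random_guesses), 3))  # close to 1
```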

So then why couldn’t the USTC team do something analogous with BosonSampling? Well, they tried to. They defined a measure that they called “HOG,” although it’s different from my and Lijie Chen’s HOG, more similar to a cross-entropy. Following Jelmer, let me call their measure CHOG, where the C could stand for Chinese, Chaoyang’s, or Changed. They calculated the CHOG for their experimental samples, and showed that it exceeds the CHOG that you’d get from the k=1 and k=2 levels of the Kalai-Kindler hierarchy, as well as from various other spoofing strategies, thereby ruling those out as classical explanations for their results.

The trouble is this: unlike with Random Circuit Sampling and LXEB, with BosonSampling and CHOG, we know that there are fast classical algorithms that achieve better scores than the trivial algorithm, the algorithm that just picks samples at random. That follows from Kalai and Kindler’s work, and it even more simply follows from a 2013 paper by me and Arkhipov, entitled “BosonSampling Is Far From Uniform.” Worse yet, with BosonSampling, we currently have no analogue of my 2019 result with Sam Gunn: that is, a result that would tell us (under suitable complexity assumptions) the highest possible CHOG score that we expect any efficient classical algorithm to be able to get. And since we don’t know exactly where that ceiling is, we can’t tell the experimentalists exactly what target they need to surpass in order to claim quantum supremacy. Absent such definitive guidance from us, the experimentalists are left playing whac-a-mole against this possible classical spoofing strategy, and that one, and that one.

This is an issue that I and others were aware of for years, although the new experiment has certainly underscored it. Had I understood just how serious the USTC group was about scaling up BosonSampling, and fast, I might’ve given the issue some more attention!

III. Fock vs. Gaussian BosonSampling

Above, I mentioned another complication in understanding the USTC experiment: namely, their reliance on Gaussian BosonSampling (GBS) rather than Fock BosonSampling (FBS), sometimes also called Aaronson-Arkhipov BosonSampling (AABS). Since I gave this issue short shrift in my previous post, let me make up for it now.

In FBS, the initial state consists of either 0 or 1 photons in each input mode, like so: |1,…,1,0,…,0⟩. We then pass the photons through our beamsplitter network, and measure the number of photons in each output mode. The result is that the amplitude of each possible output configuration can be expressed as the permanent of some n×n matrix, where n is the total number of photons. It was interest in the permanent, which plays a central role in classical computational complexity, that led me and Arkhipov to study BosonSampling in the first place.
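
For the curious, here’s a minimal Python sketch of that correspondence: Ryser’s inclusion-exclusion formula for the permanent, and the amplitude of a collision-free output pattern in a toy FBS experiment as the permanent of an n×n submatrix of the network’s unitary. (The random network and mode choices are arbitrary placeholders, and normalization factors for outputs with repeated photons are omitted.)

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula, in O(2^n * n^2) time."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

rng = np.random.default_rng(0)
m, n = 8, 3                                   # modes and photons (toy sizes)
G = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
U, _ = np.linalg.qr(G)                        # random m x m unitary network

input_modes = [0, 1, 2]                       # the state |1,1,1,0,...,0>
output_modes = [2, 4, 7]                      # a collision-free output pattern

# Amplitude for one photon landing in each of output_modes: the permanent of
# the n x n submatrix of U on the input rows and output columns.
amplitude = permanent(U[np.ix_(input_modes, output_modes)])
print(amplitude, abs(amplitude) ** 2)         # amplitude, and its probability
```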

The trouble is, preparing initial states like |1,…,1,0,…,0⟩ turns out to be really hard. No one has yet built a source that reliably outputs one and only one photon at exactly a specified time. This led two experimental groups to propose an idea that, in a 2013 post on this blog, I named Scattershot BosonSampling (SBS). In SBS, you get to use the more readily available “Spontaneous Parametric Down-Conversion” (SPDC) photon sources, which output superpositions over different numbers of photons, of the form $$\sum_{n=0}^{\infty} \alpha_n |n \rangle |n \rangle, $$ where $\alpha_n$ decreases exponentially with n. You then measure the left half of each entangled pair, hope to see exactly one photon, and are guaranteed that if you do, then there’s also exactly one photon in the right half. Crucially, one can show that, if Fock BosonSampling is hard to simulate approximately using a classical computer, then the Scattershot kind must be as well.
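
And here’s a toy sketch of that heralding step, under the standard model in which each SPDC source emits n photon pairs with probability proportional to λ^n (the λ value below is a made-up placeholder). Post-selecting on the left-half detector seeing exactly one photon leaves a random subset of modes with heralded single photons, which is what makes the input “scattershot”:

```python
import numpy as np

rng = np.random.default_rng(0)
num_sources, lam, shots = 50, 0.1, 100_000   # lam: made-up squeezing strength

# Photon-pair number emitted by each SPDC source: P(n) = (1 - lam) * lam**n.
# rng.geometric returns values in {1, 2, ...}, so subtract 1 to start at 0.
n_pairs = rng.geometric(1 - lam, size=(shots, num_sources)) - 1

# Herald on the left half of each pair showing exactly one photon; the right
# half is then guaranteed to hold exactly one photon too.
heralded = (n_pairs == 1)

# Average number of heralded single-photon inputs per experimental shot.
print(heralded.sum(axis=1).mean())           # about num_sources * (1 - lam) * lam
```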

OK, so what’s Gaussian BosonSampling? It’s simply the generalization of SBS where, instead of SPDC states, our input can be an arbitrary “Gaussian state”: for those in the know, a state obtained by applying the exponential of some quadratic polynomial in the creation operators to the vacuum. If there are m modes, then such a state requires ~m² independent parameters to specify. The quantum optics people have a much easier time creating these Gaussian states than they do creating single-photon Fock states.

While the amplitudes in FBS are given by permanents of matrices (and thus, the probabilities by the absolute squares of permanents), the probabilities in GBS are given by a more complicated matrix function called the Hafnian. Roughly speaking, while the permanent counts the number of perfect matchings in a bipartite graph, the Hafnian counts the number of perfect matchings in an arbitrary graph. The permanent and the Hafnian are both #P-complete. In the USTC paper, they talk about yet another matrix function called the “Torontonian,” which was invented two years ago. I gather that the Torontonian is just the modification of the Hafnian for the situation where you only have “threshold detectors” (which decide whether one or more photons are present in a given mode), rather than “number-resolving detectors” (which count how many photons are present).
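
To make the matching-counting picture concrete, here's a brute-force Python sketch that computes the Hafnian by summing over all perfect matchings, together with the standard identity recovering perm(B) as the Hafnian of the block matrix [[0, B], [B^T, 0]]. Nothing here is efficient; for a #P-hard quantity, that's the point.

```python
import numpy as np

def hafnian(A):
    """Brute-force Hafnian of a symmetric 2k x 2k matrix: the sum, over all
    perfect matchings of {0, ..., 2k-1}, of the product of the matched
    entries. Exponential time, as expected for a #P-hard quantity."""
    A = np.asarray(A)
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    # Match vertex 0 with each possible partner j, then recurse on the rest.
    for j in range(1, n):
        rest = [k for k in range(1, n) if k != j]
        total += A[0, j] * hafnian(A[np.ix_(rest, rest)])
    return total

# For a 0/1 adjacency matrix, the Hafnian counts perfect matchings:
# the 4-cycle has exactly two of them.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(hafnian(C4))  # 2.0

# Standard identity relating the two: perm(B) = haf([[0, B], [B^T, 0]]).
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Z = np.zeros((2, 2))
print(hafnian(np.block([[Z, B], [B.T, Z]])))  # 1*4 + 2*3 = 10 = perm(B)
```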

If Gaussian BosonSampling includes Scattershot BosonSampling as a special case, and if Scattershot BosonSampling is at least as hard to simulate classically as the original BosonSampling, then you might hope that GBS would also be at least as hard to simulate classically as the original BosonSampling. Alas, this doesn’t follow. Why not? Because for all we know, a random GBS instance might be a lot easier than a random SBS instance. Just because permanents can be expressed using Hafnians doesn’t mean that a random Hafnian is as hard as a random permanent.

Nevertheless, I think it’s very likely that the sort of analysis Arkhipov and I did back in 2011 could be mirrored in the Gaussian case. I.e., instead of starting with reasonable assumptions about the distribution and hardness of random permanents, and then concluding the classical hardness of approximate BosonSampling, one would start with reasonable assumptions about the distribution and hardness of random Hafnians (or “Torontonians”), and conclude the classical hardness of approximate GBS. But this is theoretical work that remains to be done!

IV. Application to Molecular Vibronic Spectra?

In 2014, Alán Aspuru-Guzik and collaborators put out a paper that made an amazing claim: namely that, contrary to what I and others had said, BosonSampling was not an intrinsically useless model of computation, good only for refuting QC skeptics like Gil Kalai! Instead, they said, a BosonSampling device (specifically, what would later be called a GBS device) could be directly applied to solve a practical problem in quantum chemistry. This is the computation of “molecular vibronic spectra,” also known as “Franck-Condon profiles,” whatever those are.

I never understood nearly enough about chemistry to evaluate this striking proposal, but I was always a bit skeptical of it, for the following reason. Nothing in the proposal seemed to take seriously that BosonSampling is a sampling task! A chemist would typically have some specific numbers that she wants to estimate, of which these “vibronic spectra” seemed to be an example. But while it’s often convenient to estimate physical quantities via Monte Carlo sampling over simulated observations of the physical system you care about, that’s not the only way to estimate physical quantities! And worryingly, in all the other examples we’d seen where BosonSampling could be used to estimate a number, the same number could also be estimated using one of several polynomial-time classical algorithms invented by Leonid Gurvits. So why should vibronic spectra be an exception?

After an email exchange with Alex Arkhipov, Juan Miguel Arrazola, Leonardo Novo, and Raul Garcia-Patron, I believe we finally got to the bottom of it, and the answer is: vibronic spectra are not an exception.

In terms of BosonSampling, the vibronic spectra task is simply to estimate the probability histogram of some weighted sum like $$ w_1 s_1 + \cdots + w_m s_m, $$ where w_1,…,w_m are fixed real numbers, and (s_1,…,s_m) is a possible outcome of the BosonSampling experiment, s_i representing the number of photons observed in mode i. Alas, while it takes some work, it turns out that Gurvits’s classical algorithms can be adapted to estimate these histograms. Granted, running the actual BosonSampling experiment would provide slightly more detailed information—namely, some exact sampled values of $$ w_1 s_1 + \cdots + w_m s_m, $$ rather than merely additive approximations to the values—but since we’d still need to sort those sampled values into coarse “bins” in order to compute a histogram, it’s not clear why that additional precision would ever be of chemical interest.
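
To spell out the binning point, here's a tiny Python sketch with made-up weights and made-up samples: whether the outcomes (s_1,…,s_m) come from an actual GBS device or from a classical estimator, what the chemist ultimately looks at is a coarse histogram like this one.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 10
w = rng.normal(size=m)          # stand-ins for the fixed weights w_1, ..., w_m

# Stand-in for experimental outcomes: each row is some (s_1, ..., s_m), i.e.
# photon counts per mode. A real experiment or a classical estimator would
# supply these; Poisson counts are used here purely as filler.
samples = rng.poisson(lam=0.5, size=(5000, m))

values = samples @ w            # w_1*s_1 + ... + w_m*s_m for each sample
hist, edges = np.histogram(values, bins=20)
print(hist / hist.sum())        # the coarse-binned "spectrum"
```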

This is a pity, since if the vibronic spectra application had beaten what was doable classically, then it would’ve provided not merely a first practical use for BosonSampling, but also a lovely way to verify that a BosonSampling device was working as intended.

V. Application to Finding Dense Subgraphs?

A different potential application of Gaussian BosonSampling, first suggested by the Toronto-based startup Xanadu, is finding dense subgraphs in a graph. (Or at least, providing an initial seed to classical optimization methods that search for dense subgraphs.)

This is an NP-hard problem, so to say that I was skeptical of the proposal would be a gross understatement. Nevertheless, it turns out that there is a striking observation by the Xanadu team at the core of their proposal: namely that, given a graph G and a positive even integer k, a GBS device can be used to sample a random subgraph of G of size k, with probability proportional to the square of the number of perfect matchings in that subgraph. Cool, right? And potentially even useful, especially if the number of perfect matchings could serve as a rough indicator of the subgraph’s density! Alas, Xanadu’s Juan Miguel Arrazola himself recently told me that there’s a cubic-time classical algorithm for the same sampling task, so that the possible quantum speedup that one could get from GBS in this way is at most polynomial. The search for a useful application of BosonSampling continues!
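
Just to pin down what that sampling task is (what follows is neither Xanadu's GBS procedure nor the cubic-time classical algorithm, which I haven't seen, but only a brute-force classical restatement of the task), here's a Python sketch that samples a k-vertex subset with probability proportional to the square of its number of perfect matchings:

```python
import random
from itertools import combinations

def matching_count(adj, nodes):
    """Number of perfect matchings in the subgraph induced on `nodes`,
    by brute force: pair off the first vertex with each neighbor and recurse
    (equivalently, the Hafnian of the 0/1 adjacency submatrix)."""
    nodes = list(nodes)
    if not nodes:
        return 1
    v, rest = nodes[0], nodes[1:]
    return sum(matching_count(adj, [u for u in rest if u != x])
               for x in rest if adj[v][x])

def sample_subgraph(adj, k, rng):
    """Sample a k-vertex subset with probability proportional to the square
    of its number of perfect matchings (brute force over all k-subsets)."""
    n = len(adj)
    subsets = list(combinations(range(n), k))
    weights = [matching_count(adj, s) ** 2 for s in subsets]
    return rng.choices(subsets, weights=weights, k=1)[0]

# Tiny example: a random 8-vertex graph. Densely connected 4-subsets, having
# more perfect matchings, get sampled more often.
rng = random.Random(3)
n = 8
adj = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.5:
            adj[i][j] = adj[j][i] = 1

print(sample_subgraph(adj, 4, rng))
```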


And that’s all for now! I’m grateful to all the colleagues I talked to over the last couple weeks, including Alex Arkhipov, Juan Miguel Arrazola, Sergio Boixo, Raul Garcia-Patron, Leonid Gurvits, Gil Kalai, Chaoyang Lu, John Martinis, and Jelmer Renema, while obviously taking sole responsibility for any errors in the above. I look forward to a spirited discussion in the comments, and of course I’ll post updates as I learn more!