Archive for the ‘Quantum’ Category

Slowly emerging from blog-hibervacation

Wednesday, July 21st, 2021

Alright everyone:

  1. Victor Galitski has an impassioned rant against out-of-control quantum computing hype, which I enjoyed and enthusiastically recommend, although I wished Galitski had engaged a bit more with the strongest arguments for optimism (e.g., the recent sampling-based supremacy experiments, the extrapolations that show gate fidelities crossing the fault-tolerance threshold within the next decade). Even if I’ve been saying similar things on this blog for 15 years, I clearly haven’t been doing so in a style that works for everyone. Quantum information needs as many people as possible who will tell the truth as best they see it, unencumbered by any competing interests, and has nothing legitimate to fear from that. The modern intersection of quantum theory and computer science has raised profound scientific questions that will be with us for decades to come. It’s a lily that need not be gilded with hype.
  2. Last month Limaye, Srinivasan, and Tavenas posted an exciting preprint to ECCC, which apparently proves the first (slightly) superpolynomial lower bound on the size of constant-depth arithmetic circuits, over fields of characteristic 0. Assuming it’s correct, this is another small victory in the generations-long war against the P vs. NP problem.
  3. I’m grateful to the Texas Democratic legislators who fled the state to prevent the legislature, a couple miles from my house, from having a quorum to enact new voting restrictions, and who thereby drew national attention to the enormity of what’s at stake. It should go without saying that, if a minority gets to rule indefinitely by forcing through laws to suppress the votes of a majority that would otherwise unseat it, thereby giving itself the power to force through more such laws, etc., then we no longer live in a democracy but in a banana republic. And there’s no symmetry to the situation: no matter how terrified you (or I) might feel about wokeists and their denunciation campaigns, the Democrats have no comparable effort to suppress Republican votes. Alas, I don’t know of any solutions beyond the obvious one, of trying to deal the conspiracy-addled grievance party crushing defeats in 2022 and 2024.
  4. Added: Here’s the video of my recent Astral Codex Ten ask-me-anything session.

Open thread on new quantum supremacy claims

Sunday, July 4th, 2021

Happy 4th to those in the US!

The group of Chaoyang Lu and Jianwei Pan, based at USTC in China, has been on a serious quantum supremacy tear lately. Recall that last December, USTC announced the achievement of quantum supremacy via Gaussian BosonSampling, with 50-70 detected photons—the second claim of sampling-based quantum supremacy, after Google’s in Fall 2019. However, skeptics then poked holes in the USTC claim, showing how they could spoof the results with a classical computer, basically by reproducing the k-photon correlations for relatively small values of k. Debate over the details continues, but the Chinese group seeks to render the debate largely moot with a new and better Gaussian BosonSampling experiment, with 144 modes and up to 113 detected photons. They say they were able to measure k-photon correlations for k up to about 19, which if true would constitute a serious obstacle to the classical simulation strategies that people discussed for the previous experiment.

In the meantime, though, an overlapping group of authors had put out another paper the day before (!) reporting a sampling-based quantum supremacy experiment using superconducting qubits—extremely similar to what Google did (the same circuit depth and everything), except now with 56 qubits rather than 53.

I confess that I haven’t yet studied either paper in detail—among other reasons, because I’m on vacation with my family at the beach, and because I’m trying to spend what work-time I have on my own projects. But anyone who has read them, please use the comments of this post to discuss! Hopefully I’ll learn something.

To confine myself to some general comments: since Google’s announcement in Fall 2019, I’ve consistently said that sampling-based quantum supremacy is not yet a done deal. I’ve said that quantum supremacy seems important enough to want independent replications, and demonstrations in other hardware platforms like ion traps and photonics, and better gate fidelity, and better classical hardness, and better verification protocols. Most of all, I’ve said that we needed a genuine dialogue between the “quantum supremacists” and the classical skeptics: the former doing experiments and releasing all their data, the latter trying to design efficient classical simulations for those experiments, and so on in an iterative process. Just like in applied cryptography, we’d only have real confidence in a quantum supremacy claim once it had survived at least a few years of attacks by skeptics. So I’m delighted that this is precisely what’s now happening. USTC’s papers are two new volleys in this back-and-forth; we all eagerly await the next volley, whichever side it comes from.

While I’ve been trying for years to move away from the expectation that I blog about each and every QC announcement that someone messages me about, maybe I’ll also say a word about the recent announcement by IBM of a quantum advantage in space complexity (see here for popular article and here for arXiv preprint). There appears to be a nice theoretical result here, about the ability to evaluate any symmetric Boolean function with a single qubit in a branching-program-like model. I’d love to understand that result better. But to answer the question I received, this is another case where, once you know the protocol, you know both that the experiment can be done and exactly what its result will be (namely, the thing predicted by QM). So I think the interest is almost entirely in the protocol itself.

STOC’2021 and BosonSampling

Wednesday, June 23rd, 2021

Happy birthday to Alan Turing!

This week I’m participating virtually in STOC’2021, which today had a celebration of the 50th anniversary of NP-completeness (featuring Steve Cook, Richard Karp, Leonid Levin, Christos Papadimitriou, and Avi Wigderson), and which tomorrow will have a day’s worth of quantum computing content, including a tutorial on MIP*=RE, two quantum sessions, and an invited talk on quantum supremacy by John Martinis. I confess that I’m not a fan of GatherTown, the platform being used for STOC. Basically, you get a little avatar who wanders around a virtual hotel lobby and enters sessions—but it seems to reproduce all of the frustrating and annoying parts of the experience without any of the good parts.

Ah! But I got the surprising news that Alex Arkhipov and I are among the winners of STOC’s first-ever “Test of Time Award,” for our paper on BosonSampling. It feels strange to win a “Test of Time” award for work that we did in 2011, which still seems like yesterday to me. All the more so since the experimental status and prospects of quantum supremacy via BosonSampling are still very much live, unresolved questions.

Speaking of which: on Monday, Alexey Rubtsov, of the Skolkovo Institute in Moscow, gave a talk for our quantum information group meeting at UT, about his recent work with Popova on classically simulating Gaussian BosonSampling. From the talk, I learned something extremely important. I had imagined that their simulation must take advantage of the high rate of photon loss in actual experiments (like the USTC experiment from late 2020), because how else are you going to simulate BosonSampling efficiently? But Rubtsov explained that that’s not how it works at all. While their algorithm is heuristic and remains to be rigorously analyzed, numerical studies suggest that it works even with no photon losses or other errors. Having said that, their algorithm works:

  • only for Gaussian BosonSampling, not Fock-state BosonSampling (as Arkhipov and I had originally proposed),
  • only for threshold detectors, not photon-counting detectors, and
  • only for a small number of modes (say, linear in the number of photons), not for a large number of modes (say, quadratic in the number of photons) as in the original proposal.

So, bottom line, it now looks like the USTC experiment, amazing engineering achievement though it was, is not hard to spoof with a classical computer. If so, this is because of multiple ways in which the experiment differed from my and Arkhipov’s original theoretical proposal. We know exactly what those ways are—indeed, you can find them in my earlier blog posts on the subject—and hopefully they can be addressed in future experiments. All in all, then, we’re left with a powerful demonstration of the continuing relevance of formal hardness reductions, and the danger of replacing them with intuitions and “well, it still seems hard to me.” So I hope the committee won’t rescind my and Arkhipov’s Test of Time Award based on these developments in the past couple weeks!

More quantum computing popularization!

Tuesday, June 8th, 2021

I now have a feature article up at Quanta magazine, entitled “What Makes Quantum Computing So Hard To Explain?” I.e., why do journalists, investors, etc. so consistently get central points wrong, even after the subject has been in public consciousness for more than 25 years? Perhaps unsurprisingly, I found it hard to discuss that meta-level question, as Quanta‘s editors asked me to do, without also engaging in the object-level task of actually explaining QC. For regular Shtetl-Optimized readers, there will be nothing new here, but I’m happy with how the piece turned out.

Accompanying the Quanta piece is a 10-minute YouTube explainer on quantum computing, which (besides snazzy graphics) features interviews with me, John Preskill, and Dorit Aharonov.

On a different note, my colleague Mark Wilde has recorded a punk-rock song about BosonSampling. I can honestly report that it’s some of the finest boson-themed music I’ve heard in years. It includes the following lyrics:

Quantum computer, Ain’t no loser
Quantum computer, Quantum computer

People out on the streets
They don’t know what it is
They think it finds the cliques
Or finds graph colorings
But it don’t solve anything
Said it don’t solve anything
Bosonic slot machine
My lil’ photonic dream

Speaking of BosonSampling, A. S. Popova and A. N. Rubtsov, of the Skolkovo Institute in Moscow, have a new preprint entitled Cracking the Quantum Advantage threshold for Gaussian Boson Sampling. In it, they claim to give an efficient classical algorithm to simulate noisy GBS experiments, like the one six months ago from USTC in China. I’m still unsure how well this scales from 30-40 photons up to 50-70 photons; which imperfections of the USTC experiment are primarily being taken advantage of (photon losses?); and how this relates to the earlier proposed classical algorithms for simulating noisy BosonSampling, like the one by Kalai and Kindler. Anyone with any insight is welcome to share!

OK, one last announcement: the Simons Institute for the Theory of Computing, in Berkeley, has a new online lecture series called “Breakthroughs,” which many readers of this blog might want to check out.

Three updates

Tuesday, June 1st, 2021
  1. Hooray, I’m today’s “Featured ACM Member”! Which basically means, yet another interview with me about quantum computing, with questions including what’s most surprised me about the development of QC, and what students should do to get into the field.
  2. I’m proud to announce that An Automated Approach to the Collatz Conjecture, a paper by Emre Yolcu, myself, and Marijn Heule that we started working on over four years ago, is finally available on the arXiv, and will be presented at the 2021 Conference on Automated Deduction. Long story short: no, we didn’t prove Collatz, but we have an approach that can for the first time prove certain Collatz-like statements in a fully automated way, so hopefully that’s interesting! There was also a Quanta article even before our paper had come out (I wasn’t thrilled about the timing).
  3. The legendary Baba Brinkman has a new rap about quantum computing (hat tip to blog commenter YD). Having just watched the music video, I see it as one of the better popularization efforts our field has seen in the past 25 years—more coherent than the average journalistic account and with a much better backbeat. (I do, however, take a more guarded view than Brinkman of the potential applications, especially to e.g. autonomous driving and supply-chain optimization.)

In which I answer more quantum computing questions

Monday, May 24th, 2021

Yesterday, I had fun doing an open-ended Q&A at the Astral Codex Ten weekly online meetup. See here for the YouTube video. The questions were mainly about quantum computing, but ranged over various other topics as well, including education policy, the Great Stagnation, and what my biggest disagreements with Scott Alexander are.

In other news, last week I gave my talk about quantum supremacy at the (virtual, of course) Quantum Science Seminar, organized by Sebastian Blatt and others. See here for the YouTube video. Since I (alas) needed to leave after an hour sharp, the organizers asked if they could send me additional questions in writing and have me answer them. I said sure, as long as I could also post the Q&A on this blog! So without further ado…

Q: As you said, in computer science it’s difficult to prove that things are hard. From a computer science perspective, to what extent is “quantum supremacy” something absolute, and to what extent is it a shifting bar that depends on how good our classical hardware and algorithms are?

SCOTT: It’s kind of like the question “can computers beat the best humans at chess?” The latter question also involves a non-absolute shifting bar – maybe humans are getting better! Or maybe they simply haven’t figured out how to beat the computers yet! So how can we say “computer supremacy” has been decisively achieved? Nevertheless, one can clearly see a phase transition, with Deep Blue in 1996-1997, where the burden of proof shifted massively from one side to the other, and has stayed there ever since. Even if humans are getting better, the computers have also gotten better, and at such an insane rate that humans seem to have no chance of ever again catching up.

From a theoretical standpoint, that’s exactly what we might expect to happen with quantum supremacy experiments – simply because they involve tasks that have polynomial scaling for quantum computers, but (as far as we know) exponential scaling for classical computers, where each additional qubit roughly doubles the classical resources required. In analyzing concrete quantum supremacy experiments, I’d say that the goal is not to pin down the exact factor by which they’re beating this or that classical simulation (the answers will change rapidly anyway, and depend on all sorts of low-level details), but simply to figure out whether or not we’ve entered that phase transition.
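
To make that doubling concrete, here is a back-of-the-envelope sketch (mine, not from any particular paper) of the memory needed just to store the full state vector of an n-qubit system, which is what the most naive classical simulation requires. Smarter methods such as tensor-network contraction can of course do better for many circuits; the point is only the qualitative exponential growth.

    # Memory to store a full n-qubit state vector: 2^n complex amplitudes,
    # 16 bytes each (complex128). Each extra qubit doubles the requirement.
    def statevector_bytes(n_qubits: int) -> int:
        return (2 ** n_qubits) * 16

    for n in (30, 40, 50, 53, 60):
        print(f"{n} qubits: ~{statevector_bytes(n) / 2**30:,.0f} GiB")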

Q: What do you generally think about improvements in tensor network methods as a challenge to quantum supremacy and the recent simulation of the supremacy result in Beijing?

SCOTT: I blogged about this here. It was a clever paper, which showed that if you focus narrowly on spoofing the linear cross-entropy benchmark used by Google, then there’s a classical algorithm to do that much faster than had been previously pointed out, by generating many samples that all share most of their bits in common (e.g., that all agree on the first 30 of the 53 bits). But many people remain unaware that, if you just changed the benchmark – for example, if you insisted that the returned samples not only pass the linear cross-entropy benchmark, but also be sufficiently different – then this classical spoofing strategy wouldn’t work and we’d be back to the previous status quo.
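
For those who haven’t seen it written down, here is a minimal sketch of the linear cross-entropy benchmark itself (my own toy code, not Google’s): the score is just 2^n times the average ideal probability of the bitstrings the device actually returned, minus 1. Since it looks only at that average, and not at how different the samples are from one another, it is exactly the kind of quantity the spoofing strategy above can game.

    import numpy as np

    def linear_xeb(samples, ideal_probs, n_qubits):
        # `ideal_probs` maps each sampled bitstring to its probability under a
        # classical simulation of the ideal circuit. Uniform random guessing
        # scores ~0; a perfect noiseless device would score ~1.
        mean_p = np.mean([ideal_probs[s] for s in samples])
        return (2 ** n_qubits) * mean_p - 1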

Q: Why do you need a random circuit to test the device, and also why is this more interesting/useful than doing something very specific many times to test the device?

SCOTT: The reasons to use random quantum circuits are simply that
(1) they generate complicated entangled states on all the qubits nearly as rapidly as it’s possible to do so – indeed, we now have theoretical results that give a detailed understanding of this process, and
(2) random circuits seem to have about as little “usable structure” (which a classical simulation might exploit) as it’s possible to have.
Eventually, of course, we’d like to run actually useful circuits (say, those that arise in Shor’s factoring algorithm), which will typically have regular patterns and be extremely far from random! But then more qubits will be needed to get an advantage over classical computers. It’s not terribly surprising for the first advantage over classical to be via random quantum circuits, which in some sense “maximally exploit” the hardware resources available.
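
Purely as an illustration, here is a toy NumPy sketch (an arbitrary gate set of my choosing, not the gates or layout Google used) of what generating and brute-force simulating a small random circuit looks like: layers of random single-qubit rotations interleaved with entangling gates, producing a highly non-uniform output distribution.

    import numpy as np

    rng = np.random.default_rng(0)

    def apply_1q(state, gate, q, n):
        """Apply a 2x2 gate to qubit q of an n-qubit state vector."""
        psi = np.moveaxis(state.reshape([2] * n), q, 0)
        psi = np.tensordot(gate, psi, axes=([1], [0]))
        return np.moveaxis(psi, 0, q).reshape(-1)

    def apply_cz(state, q1, q2, n):
        """Apply a controlled-Z between qubits q1 and q2."""
        psi = state.copy().reshape([2] * n)
        idx = [slice(None)] * n
        idx[q1], idx[q2] = 1, 1
        psi[tuple(idx)] *= -1
        return psi.reshape(-1)

    def random_circuit_state(n, depth):
        state = np.zeros(2 ** n, dtype=complex)
        state[0] = 1.0
        for layer in range(depth):
            for q in range(n):                    # random single-qubit rotations
                theta, phi = rng.uniform(0, 2 * np.pi, size=2)
                gate = np.array([[np.cos(theta), -np.exp(1j * phi) * np.sin(theta)],
                                 [np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])
                state = apply_1q(state, gate, q, n)
            for q in range(layer % 2, n - 1, 2):  # entangle neighbors with CZ
                state = apply_cz(state, q, q + 1, n)
        return state

    probs = np.abs(random_circuit_state(n=8, depth=10)) ** 2
    print(probs.sum(), probs.max())   # sums to ~1, strongly non-uniform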

Q: Have you heard of/what do you think about the recent NP-verification experiment using a quantum optical setup from Paris?

SCOTT: That experiment was actually based on a protocol that Beigi, Fefferman, Drucker, Shor, and I proposed back in 2008. It’s crucial for people to understand that there’s no claimed speedup here for solving any NP-complete problem. We’re talking about a more arcane task: namely, proving to someone that an NP-complete problem has a solution by sending them a small number of qubits. Even there, the protocol depends on the ability to send two quantum states that are guaranteed not to be entangled with each other; also, the communication savings is “only” polynomial rather than exponential (in our original protocol, roughly √n qubits where n classical bits would have been needed). Nevertheless, it’s always fun to see a real experiment implementing some version of something that you worked out on paper, even if you already knew it would work!

Q: If the difficulty for classical simulation is related to the Hilbert space dimension, has it been formally proven that an ideal analog classical computer cannot outperform a quantum computer?

SCOTT: This is not a question of “formal proof” but of physics. In my view, we already know deep reasons why, in our universe, analog classical computers are unlikely to be able to do anything that can’t be efficiently simulated using a standard digital computer. (In other words, why analog classical computers don’t violate the “Extended Church-Turing Thesis.”) Those reasons have to do with nonlinear dynamics chaotically amplifying even the tiniest errors in an analog device. Or, if you really want to push this discussion to the bitter end, they have to do with the breakdown in our picture of a smooth spacetime that’s expected to occur at the Planck scale, of ~10^-33 centimeters and ~10^-43 seconds, for reasons of black hole thermodynamics and quantum gravity. Crucially, neither of these issues applies to quantum computation. The former doesn’t apply because of quantum error correction and fault-tolerance, which have the effect of “discretizing” continuous errors; while the latter doesn’t apply because as far as anyone knows today, quantum mechanics (unlike theories that assume a smooth spacetime) is exactly true.

Q: [In the comparison between quantum and classical computation], what do we mean by “classical resources” here? Do we mean something parochial like “resources obeying non-quantum laws that are available to us humans on Earth?” Or is any classical resource fair game, no matter its size and classical equations of motion? In that case, doesn’t demonstrating quantum supremacy require demonstrating that the quantum computer exceeds the capabilities of, say, a classical computer the size of the solar system that exploits some CTC (closed timelike curve)?

SCOTT: If CTCs were possible, that would be a revolution in physics even greater than quantum mechanics! Leaving CTCs aside, though, cosmology and quantum gravity seem to impose a limit of roughly 10^122 on the number of bits (and the number of operations on the bits) that any computer that fit inside the observable universe could possibly have. And if you envision that cosmological computer as a classical one, then it shouldn’t take impossibly long for us to build quantum computers that can outperform it on some tasks: indeed, a device with ~400 qubits should already be enough! But of course we’re not there yet: with 50-60 qubits, QCs right now are “merely” challenging the largest classical computers currently available on earth (again, on contrived sampling tasks), rather than the largest that could fit in the observable universe.
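
A quick sanity check on that qubit count (just arithmetic, taking the ~10^122-bit cosmological bound as given): the state space of n qubits has dimension 2^n, so the crossover happens around n = log2(10^122) ≈ 405, which is where the “~400 qubits” figure comes from.

    import math
    print(122 * math.log2(10))   # ~405.3 qubits for 2^n to exceed 10^122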

Q: Why is a quantum state (superposition or entangled state) inherently more fragile than a classical state?

SCOTT: Because when information from the state (say, whether a qubit is 0 or 1, or which path a photon takes through a beamsplitter network) “leaks out” into the environment, the information effectively becomes entangled with the environment, which damages the state in a measurable way. Indeed, it now appears to an observer as a mixed state rather than a pure state, so that interference between the different components can no longer happen. This is a fundamental, justly-famous feature of quantum information that’s not shared by classical information.
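
Here is a tiny density-matrix sketch of that statement (a toy example of mine, not something from the Q&A): start a qubit in the superposition (|0⟩+|1⟩)/√2, let a single “environment” qubit record which branch it is in, and trace the environment out. The off-diagonal terms that interference depends on are gone, and the qubit looks like a classical coin flip.

    import numpy as np

    plus = np.array([1, 1]) / np.sqrt(2)        # qubit in (|0> + |1>)/sqrt(2)
    env = np.array([1, 0])                      # environment qubit in |0>

    joint = np.kron(plus, env)                  # unentangled joint state
    cnot = np.array([[1, 0, 0, 0],              # the environment "records"
                     [0, 1, 0, 0],              # which branch the qubit is in
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    joint = cnot @ joint

    rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
    rho_qubit = np.trace(rho, axis1=1, axis2=3)  # trace out the environment

    print(rho_qubit)   # diag(1/2, 1/2): a mixed state, no interference terms left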

Q: Given that we have noisy intermediate-scale quantum devices, what do you see as the fundamental role of noise in the discussion of quantum advantage or quantum supremacy? Is it simply a question of “less noise is better,” or are there things that cannot be done if there is noise?

SCOTT: There will always be some noise. The big question, about any given platform, is whether the noise is low enough that you can start usefully error-correcting it away, or whether the noise is so high that there’s no point (i.e., whether you’re above or below the “fault-tolerance threshold”). Until you start doing error-correction, most experts believe there’s a severe limit to how far you can scale: probably to quantum computations involving a few hundred qubits at most. Whereas once you have error-correction, at least in principle the sky’s the limit.
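
To illustrate the dichotomy with a standard toy scaling formula (a caricature for intuition, not a model of any particular hardware): below the threshold, increasing the code distance suppresses the logical error rate exponentially, while above it, adding more qubits only makes things worse.

    # Toy surface-code-style scaling: logical error rate ~ A * (p/p_th)^((d+1)//2)
    # for physical error rate p, threshold p_th (often quoted as roughly 1%), and
    # code distance d. Values of 1 or more just mean "no protection at all".
    def logical_error_rate(p, p_th=0.01, d=11, A=0.1):
        return A * (p / p_th) ** ((d + 1) // 2)

    for p in (0.002, 0.02):   # one physical error rate below threshold, one above
        print(p, [f"{logical_error_rate(p, d=d):.1e}" for d in (3, 11, 25)])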

Q: You used the Wright brothers as an example where an airplane that was not practically useful itself pioneered the path to useful flight. In what sense do the sampling experiments of Google and USTC also pioneer that path for what we need for the future of quantum computing, or to what extent do you see them as niche examples of quantum supremacy?

SCOTT: I think the majority of what Google did, in integrating 53 superconducting qubits, making them programmable, etc. – and especially in characterizing the noise in a system at that scale – will be directly useful going forward. Indeed, that’s a large part of why they did it! Likewise, a lot of what USTC did, in integrating hundreds of beamsplitters, photon sources, and photodetectors, could be directly relevant to building a universal optical quantum computer. On the other hand, it’s true that both groups took shortcuts with the immediate goal of quantum supremacy in mind. As an example, Google’s chip uses a particular 2-qubit gate that was chosen, not because it shows up naturally in any application, but simply because it’s extra-hard to simulate using tensor network contraction algorithms, so it let them get to quantum supremacy faster than if they’d used a more conventional 2-qubit gate like the CNOT.

Q: To what extent does the measured circuit fidelity of 0.2% in the Google experiment limit the usability of this system for other computations?

SCOTT: Oh, we don’t know of anything particularly useful to do with Google’s Sycamore chip – that is, anything that you couldn’t do much more easily without it – other than
(1) quantum supremacy demonstrations,
(2) possibly the generation of cryptographically certified random bits, and
(3) of course, calibration experiments that tell you about the behavior of integrated superconducting qubits and thereby help you iterate to the next device.
But things are developing rapidly – the best circuit fidelity that was achievable in 2019 is not necessarily the best now, or the best that will be achieved in another year or two.

Update (May 25): Please, no new questions; just discussion of the existing questions and answers! I had hoped to get some work done today; I hadn’t planned on another ask-me-anything session. Thanks!

Doubts about teapot supremacy: my reply to Richard Borcherds

Tuesday, April 20th, 2021

Richard Borcherds is a British mathematician at Berkeley, who won the 1998 Fields Medal for the proof of the monstrous moonshine conjecture, among many other contributions. A couple months ago, Borcherds posted on YouTube a self-described “rant” about quantum computing, which was recently making the rounds on Facebook and which I found highly entertaining.

Borcherds points out that the term “quantum supremacy” means only that quantum computers can outperform existing classical computers on some benchmark, which can be chosen to show maximum advantage for the quantum computer. He allows that BosonSampling could have some value, for example in calibrating quantum computers or in comparing one quantum computer to another, but he decries the popular conflation of quantum supremacy with the actual construction of a scalable quantum computer able (for example) to run Shor’s algorithm to break RSA.

Borcherds also proposes a “teapot test,” according to which any claim about quantum computers can be dismissed if an analogous claim would hold for a teapot (which he brandishes for the camera). For example, there are many claims to solve practical optimization and machine learning problems by “quantum/classical hybrid algorithms,” wherein a classical computer does most of the work but a quantum computer is somehow involved. Borcherds points out that, at least as things stand in early 2021, in most or all such cases, the classical computer could’ve probably done as well entirely on its own. So then if you put a teapot on top of your classical computer while it ran, you could equally say you used a “classical/teapot hybrid approach.”

Needless to say, Borcherds is correct about all of this. I’ve made similar points on this blog for 15 years, although less Britishly. I’m delighted to have such serious new firepower on the scoffing-at-QC-hype team.

I do, however, have one substantive disagreement. At one point, Borcherds argues that sampling-based quantum supremacy itself fails his teapot test. For consider the computational problem of predicting how many pieces a teapot will break into if it’s dropped on the ground. Clearly, he says, the teapot itself will outperform any simulation running on any existing classical computer at that task, and will therefore achieve “teapot supremacy.” But who cares??

I’m glad that Borcherds has set out, rather crisply, an objection that’s been put to me many times over the past decade. The response is simple: I don’t believe the teapot really does achieve teapot supremacy on the stated task! At the least, I’d need to be shown why. You can’t just assert it without serious argument.

If we want to mirror the existing quantum supremacy experiments, then the teapot computational problem, properly formulated, should be: given as input a description of a teapot’s construction, the height from which it’s dropped, etc., output a sample from the probability distribution over the number of shards that the teapot will break into when it hits the floor.

If so, though, then clearly a classical computer can easily sample from the same distribution! Why? Because presumably we agree that there’s a negligible probability of more than (say) 1000 shards. So the distribution is characterized by a list of at most 1000 probabilities, which can be estimated empirically (at the cost of a small warehouse of smashed teapots) and thereafter used to generate samples. In the plausible event that the distribution is (say) a Gaussian, it’s even easier: just estimate the mean and variance.
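
In code, the classical “teapot simulator” really is that dumb, as in the following sketch (with made-up shard counts standing in for the small warehouse of smashed teapots):

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical empirical data: number of shards observed in past smashes.
    observed_shards = [3, 5, 4, 6, 5, 4, 7, 5, 6, 4, 5, 3, 6, 5, 4]

    # Estimate the distribution once, then sample from it forever after.
    values, counts = np.unique(observed_shards, return_counts=True)
    estimated_probs = counts / counts.sum()

    print(rng.choice(values, size=10, p=estimated_probs))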

A couple days ago, I was curious what the distribution looked like, so I decided to order some teapots from Amazon and check. Unfortunately, real porcelain teapots are expensive, and it seemed vaguely horrific to order dozens (as would be needed to get reasonable data) for the sole purpose of smashing them on my driveway. So I hit on what seemed like a perfect solution: I ordered toy teapots, which were much smaller and cheaper. Alas, when my toy “porcelain” teapots arrived yesterday, they turned out (unsurprisingly in retrospect for a children’s toy) to be some sort of plastic or composite material, meaning that they didn’t break unless one propelled them downward forcefully. So, while I can report that they tended to break into one or two large pieces along with two or three smaller shards, I found it impossible to get better data. (There’s a reason why I became a theoretical computer scientist…)

The good news is that my 4-year-old son had an absolute blast smashing toy teapots with me on our driveway, while my 8-year-old daughter was thrilled to take the remaining, unbroken teapots for her dollhouse. I apologize if this fails to defy gender stereotypes.

Anyway, it might be retorted that it’s not good enough to sample from a probability distribution: what’s wanted, rather, is to calculate how many pieces this specific teapot will break into, given all the microscopic details of it and its environment. Aha, this brings us to a crucial conceptual point: in order for something to count as an “input” to a computer, you need to be able to set it freely. Certainly, at the least, you need to be able to measure and record the input in its entirety, so that someone trying to reproduce your computation on a standard silicon computer would know exactly which computation to do. You don’t get to claim computational supremacy based on a problem with secret inputs: that’s like failing someone on a math test without having fully told them the problems.

Ability to set and know the inputs is the key property that’s satisfied by Google’s quantum supremacy experiment, and to a lesser extent by the USTC BosonSampling experiment, but that’s not satisfied at all by the “smash a teapot on the floor” experiment. Or perhaps it’s better to say: influences on a computation that vary uncontrollably and chaotically, like gusts of air hitting the teapot as it falls to the floor, shouldn’t be called “inputs” at all; they’re simply noise sources. And what one does with noise sources is to try to estimate their distribution and average over them—but in that case, as I said, there’s no teapot supremacy.

A Facebook friend said to me: that’s well and good, but surely we could change Borcherds’s teapot experiment to address this worry? For example: add a computer-controlled lathe (or even a 3D printer), with which you can build a teapot in an arbitrary shape of your choice. Then consider the problem of sampling from the probability distribution over how many pieces that teapot will smash into, when it’s dropped from some standard height onto some standard surface. I replied that this is indeed more interesting—in fact, it already seems more like what engineers do in practice (still, sometimes!) when building wind tunnels, than like a silly reductio ad absurdum of quantum supremacy experiments. On the other hand, if you believe the Extended Church-Turing Thesis, then as long as your analog computer is governed by classical physics, it’s presumably inherently limited to an Avogadro’s number type speedup over a standard digital computer, whereas with a quantum computer, you’re limited only by the exponential dimensionality of Hilbert space, which seems more interesting.

Or maybe I’m wrong—in which case, I look forward to the first practical demonstration of teapot supremacy! Just like with quantum supremacy, though, it’s not enough to assert it; you need to … put the tea where your mouth is.

Update: On the suggestion of Ernest Davis, who I can now reveal as the Facebook friend mentioned above, I just ordered some terra cotta flower pots, which look cheap, easily smashable, and environmentally friendly, and which will hopefully be acceptable substitutes for porcelain teapots in a new experiment. (Not that my main arguments in this post hinge on the results of such an experiment! That’s the power of theory.)

Another Update: Some of you might enjoy John Horgan’s Scientific American column on reality vs. hype in quantum computing, based on conversations with me and with Terry Rudolph of PsiQuantum.

The ACM Prize thing

Wednesday, April 14th, 2021

Last week I got an email from Dina Katabi, my former MIT colleague, asking me to call her urgently. Am I in trouble? For what, though?? I haven’t even worked at MIT for five years!

Luckily, Dina only wanted to tell me that I’d been selected to receive the 2020 ACM Prize in Computing, a mid-career award founded in 2007 that comes with $250,000 from Infosys. Not the Turing Award but I’d happily take it! And I could even look back on 2020 fondly for something.

I was utterly humbled to see the list of past ACM Prize recipients, which includes amazing computer scientists I’ve been privileged to know and learn from (like Jon Kleinberg, Sanjeev Arora, and Dan Boneh) and others who I’ve admired from afar (like Daphne Koller, Jeff Dean and Sanjay Ghemawat of Google MapReduce, and David Silver of AlphaGo and AlphaZero).

I was even more humbled, later, to read my prize citation, which focuses on four things:

  1. The theoretical foundations of the sampling-based quantum supremacy experiments now being carried out (and in particular, my and Alex Arkhipov’s 2011 paper on BosonSampling);
  2. My and Avi Wigderson’s 2008 paper on the algebrization barrier in complexity theory;
  3. Work on the limitations of quantum computers (in particular, the 2002 quantum lower bound for the collision problem); and
  4. Public outreach about quantum computing, including through my book Quantum Computing Since Democritus (QCSD), popular talks and articles, and this blog.

I don’t know if I’m worthy of such a prize—but I know that if I am, then it’s mainly for work I did between roughly 2001 and 2012. This honor inspires me to want to be more like I was back then, when I was driven, non-jaded, and obsessed with figuring out the contours of BQP and efficient computation in the physical universe. It makes me want to justify the ACM’s faith in me.

I’m grateful to the committee and nominators, and more broadly, to the whole quantum computing and theoretical computer science communities—which I “joined” in some sense around age 16, and which were the first communities where I ever felt like I belonged. I’m grateful to the mentors who made me what I am, especially Chris Lynch, Bart Selman, Lov Grover, Umesh Vazirani, Avi Wigderson, and (if he’ll allow me to include him) John Preskill. I’m grateful to the slightly older quantum computer scientists who I looked up to and tried to emulate, like Dorit Aharonov, Andris Ambainis, Ronald de Wolf, and John Watrous. I’m grateful to my wonderful colleagues at UT Austin, in the CS department and beyond. I’m grateful to my students and postdocs, the pride of my professional life. I’m grateful, of course, to my wife, parents, and kids.

By coincidence, my last post was also about prizes to theoretical computer scientists—in that case, two prizes that attracted controversy because of the recipient’s (or would-be recipient’s) political actions or views. It would understate matters to point out that not everyone has always agreed with everything I’ve said on this blog. I’m ridiculously lucky, and I know it, that even living through this polarized and tumultuous era, I never felt forced to choose between academic success and the freedom to speak my conscience in public under my real name. If there’s been one constant in my public stands, I’d like to think that—inspired by memories of my own years as an unknown, awkward, self-conscious teenager—it’s been my determination to nurture and protect talented young scientists, whatever they look like and wherever they come from. And I’ve tried to live up to that ideal in real life, and I welcome anyone’s scrutiny as to how well I’ve done.

What should I do with the prize money? I confess that my first instinct was to donate it, in its entirety, to some suitable charity—specifically, something that would make all the strangers who’ve attacked me on Twitter, Reddit, and so forth over the years realize that I’m fundamentally a good person. But I was talked out of this plan by my family, who pointed out that
(1) in all likelihood, nothing will make online strangers stop hating me,
(2) in any case this seems like a poor basis for making decisions, and
(3) if I really want to give others a say in what to do with the winnings, then why not everyone who’s stood by me and supported me?

So, beloved commenters! Please mention your favorite charitable causes below, especially weird ones that I wouldn’t have heard of otherwise. If I support their values, I’ll make a small donation from my prize winnings. Or a larger donation, especially if you donate yourself and challenge me to match. Whatever’s left after I get tired of donating will probably go to my kids’ college fund.

Update: And by an amusing coincidence, today is apparently “World Quantum Day”! I hope your Quantum Day is as pleasant as mine (and stable and coherent).

QC ethics and hype: the call is coming from inside the house

Saturday, March 20th, 2021

For years, I’d sometimes hear discussions about the ethics of quantum computing research. Quantum ethics!

When the debates weren’t purely semantic, over the propriety of terms like “quantum supremacy” or “ancilla qubit,” they were always about chin-strokers like “but what if cracking RSA encryption gives governments more power to surveil their citizens? or what if only a few big countries or companies get quantum computers, thereby widening the divide between haves and have-nots?” Which, OK, conceivably these will someday be issues. But, besides barely depending on any specific facts about quantum computing, these debates always struck me as oddly safe, because the moral dilemmas were so hypothetical and far removed from us in time.

I confess I may have even occasionally poked fun when asked to expound on quantum ethics. I may have commented that quantum computers probably won’t kill anyone unless a dilution refrigerator tips over onto their head. I may have asked forgiveness for feeding custom-designed oracles to BQP and QMA, without first consulting an ethics committee about the long-term effects on those complexity classes.

Now fate has punished me for my flippancy. These days, I really do feel like quantum computing research has become an ethical minefield—but not for any of the reasons mentioned previously. What’s new is that millions of dollars are now potentially available to quantum computing researchers, along with equity, stock options, and whatever else causes “ka-ching” sound effects and bulging eyes with dollar signs. And in many cases, to have a shot at such riches, all an expert needs to do is profess optimism that quantum computing will have revolutionary, world-changing applications and have them soon. Or at least, not object too strongly when others say that.

Some of today’s rhetoric will of course remind people of the D-Wave saga, which first brought this blog to prominence when it began in earnest in 2007. Quantum computers, we hear now as then, will soon leave the Earth’s fastest supercomputers in the dust. They’re going to harness superposition to try all the exponentially many possible solutions at once. They’ll crack the Traveling Salesman Problem, and will transform machine learning and AI beyond recognition. Meanwhile, simulations of quantum systems will be key to solving global warming and cancer.

Despite the parallels, though, this new gold rush doesn’t feel to me like the D-Wave one, which seems in retrospect like just a little dry run. If I had to articulate what’s new in one sentence, it’s that this time “the call is coming from inside the house.” Many of the companies making wildly overhyped claims are recognized leaders of the field. They have brilliant quantum computing theorists and experimentalists on their staff with impeccable research records. Some of those researchers are among my best friends. And even when I wince at the claims of near-term applications, in many cases (especially with quantum simulation) the claims aren’t obviously false—we won’t know for certain until we try it and see! It’s genuinely gotten harder to draw the line between defensible optimism and exaggerations verging on fraud.

Indeed, this time around virtually everyone in QC is “complicit” to a greater or lesser degree. I, too, have accepted compensation to consult on quantum computing topics, to give talks at hedge funds, and in a few cases to serve as a scientific adviser to quantum computing startups. I tell myself that, by 2021 standards, this stuff is all trivial chump change—a few thousand dollars here or there, to expound on the same themes that I already discuss free of charge on this blog. I actually get paid to dispel hype, rather than propagate it! I tell myself that I’ve turned my back on the orders of magnitude more money available to those willing to hitch their scientific reputations to the aspirations of this or that specific QC company. (Yes, this blog, and my desire to preserve its intellectual independence and credibility, might well be costing me millions!)

But, OK, some would argue that accepting any money from QC companies or QC investors just puts you at the top of a slope with unabashed snake-oil salesmen at the bottom. With the commercialization of our field that started around 2015, there’s no bright line anymore marking the boundary between pure scientific curiosity and the pursuit of filthy lucre; it’s all just points along a continuum. I’m not sure that these people are wrong.

As some of you might’ve seen already, IonQ, the trapped-ion QC startup that originated from the University of Maryland, is poised to have the first-ever quantum computing IPO—a so-called “SPAC IPO,” which, while I’m a financial ignoramus, apparently involves merging with a shell company and thereby bypassing the SEC’s normal IPO rules. Supposedly they’re seeking $650 million in new funding and a $2 billion market cap. If you want to see what IonQ is saying about QC to prospective investors, click here. Lacking any choice in the matter, I’ll probably say more about these developments in a future post.

Meanwhile, PsiQuantum, the Palo-Alto-based optical QC startup, has said that it’s soon going to leave “stealth mode.” And Amazon, Microsoft, Google, IBM, Honeywell, and other big players continue making large investments in QC—treating it, at least rhetorically, not at all like blue-sky basic research, but like a central part of their future business plans.

All of these companies have produced or funded excellent QC research. And of course, they’re all heterogeneous, composed of individuals who might vehemently disagree with each other about the near- or long-term prospects of QC. And yet all of them have, at various times, inspired reflections in me like the ones in this post.

I regret that this post has no clear conclusion. I’m still hashing things out, soliciting thoughts from my readers and friends. Speaking of which: this coming Monday, March 22, at 8-10pm US Eastern time, I’ve decided to hold a discussion around these issues on Clubhouse—my “grand debut” on that app, and an opportunity to see whether I like it or not! My friend Adam Brown will moderate the discussion; other likely participants will be John Horgan, George Musser, Michael Nielsen, and Matjaž Leonardis. If you’re on Clubhouse, I hope to see you there!

Update (March 22): Read this comment by “FB” if you’d like to understand how we got to this point.

Long-delayed UT Austin Quantum Complexity Theory Student Project Showcase!

Thursday, March 11th, 2021

Back at MIT, whenever I taught my graduate course on Quantum Complexity Theory (see here for lecture notes), I had a tradition of showcasing the student projects on this blog: see here (Fall 2010), here (Fall 2012), here (Fall 2014). I was incredibly proud that, each time I taught, at least some of the projects led to publishable original research—sometimes highly significant research, like Paul Christiano’s work on quantum money (which led to my later paper with him), Shelby Kimmel’s work on quantum query complexity, Jenny Barry’s work on quantum partially observable Markov decision processes (“QOMDPs”), or Matt Coudron and Henry Yuen’s work on randomness expansion (which led to their later breakthrough in the subject).

Alas, after I moved to UT Austin, for some reason I discontinued the tradition of these blog-showcases—and inexcusably, I did this even though the wonderful new research results continued! Now that I’m teaching Quantum Complexity Theory at UT for the third time (via Zoom, of course), I decided that it was finally time to remedy this. To keep things manageable, this time I’m going to limit myself to research projects that began their lives in my course and that are already public on the arXiv (or in one case, that will soon be).

So please enjoy the following smorgasbord, from 2016 and 2019 iterations of my course! And if you have any questions about any of the projects—well, I’ll try to get the students to answer in the comments section! Thanks so much and congratulations to the students for their work.

From the Fall 2016 iteration of the course

William Hoza (project turned into a joint paper with Cole Graham), Universal Bell Correlations Do Not Exist.

We prove that there is no finite-alphabet nonlocal box that generates exactly those correlations that can be generated using a maximally entangled pair of qubits. More generally, we prove that if some finite-alphabet nonlocal box is strong enough to simulate arbitrary local projective measurements of a maximally entangled pair of qubits, then that nonlocal box cannot itself be simulated using any finite amount of entanglement. We also give a quantitative version of this theorem for approximate simulations, along with a corresponding upper bound.

Patrick Rall, Signed quantum weight enumerators characterize qubit magic state distillation.

Many proposals for fault-tolerant quantum computation require injection of ‘magic states’ to achieve a universal set of operations. Some qubit states are above a threshold fidelity, allowing them to be converted into magic states via ‘magic state distillation’, a process based on stabilizer codes from quantum error correction.
We define quantum weight enumerators that take into account the sign of the stabilizer operators. These enumerators completely describe the magic state distillation behavior when distilling T-type magic states. While it is straightforward to calculate them directly by counting exponentially many operator weights, it is also an NP-hard problem to compute them in general. This suggests that finding a family of distillation schemes with desired threshold properties is at least as hard as finding the weight distributions of a family of classical codes.
Additionally, we develop search algorithms fast enough to analyze all useful 5 qubit codes and some 7 qubit codes, finding no codes that surpass the best known threshold.

From the Spring 2019 iteration of the course

Ying-Hao Chen, 2-Local Hamiltonian with Low Complexity is QCMA-complete.

We prove that the 2-Local Hamiltonian (2-LH) with Low Complexity problem is QCMA-complete by combining the results from the QMA-completeness of 2-LH and the QCMA-completeness of 3-LH with Low Complexity. The idea is straightforward. It has been known that 2-LH is QMA-complete. By putting a low complexity constraint on the input state, we make the problem QCMA. Finally, we use similar arguments as in [Kempe, Kitaev, Regev] to show that all QCMA problems can be reduced to our proposed problem.

Jeremy Cook, On the relationships between Z-, C-, and H-local unitaries.

Quantum walk algorithms can speed up search of physical regions of space in both the discrete-time [arXiv:quant-ph/0402107] and continuous-time setting [arXiv:quant-ph/0306054], where the physical region of space being searched is modeled as a connected graph. In such a model, Aaronson and Ambainis [arXiv:quant-ph/0303041] provide three different criteria for a unitary matrix to act locally with respect to a graph, called Z-local, C-local, and H-local unitaries, and left the open question of relating these three locality criteria. Using a correspondence between continuous- and discrete-time quantum walks by Childs [arXiv:0810.0312], we provide a way to approximate N×N H-local unitaries with error δ using O(1/√δ, √N) C-local unitaries, where the comma denotes the maximum of the two terms.

Joshua A. Cook, Approximating Unitary Preparations of Orthogonal Black Box States.

In this paper, I take a step toward answering the following question: for m different small circuits that compute m orthogonal n-qubit states, is there a small circuit that will map m computational basis states to these m states without leaving any auxiliary bits changed on any input? While this may seem simple, the constraint that auxiliary bits always be returned to 0 on any input (even ones besides the m we care about) led me to use sophisticated techniques. I give an approximation of such a unitary in the m = 2 case that has size polynomial in the approximation error and the number of qubits n.

Sabee Grewal (project turned into a joint paper with me), Efficient Learning of Non-Interacting Fermion Distributions.

We give an efficient classical algorithm that recovers the distribution of a non-interacting fermion state over the computational basis. For a system of n non-interacting fermions and m modes, we show that O(m^2 n^4 log(m/δ)/ε^4) samples and O(m^4 n^4 log(m/δ)/ε^4) time are sufficient to learn the original distribution to total variation distance ε with probability 1−δ. Our algorithm empirically estimates the one- and two-mode correlations and uses them to reconstruct a succinct description of the entire distribution efficiently.

Sam Gunn and Niels Kornerup, Review of a Quantum Algorithm for Betti Numbers.

We looked into the algorithm for calculating Betti numbers presented by Lloyd, Garnerone, and Zanardi (LGZ). We present a new algorithm in the same spirit as LGZ with the intent of clarifying quantum algorithms for computing Betti numbers. Our algorithm is simpler and slightly more efficient than that presented by LGZ. We present a thorough analysis of our algorithm, pointing out reasons that both our algorithm and that presented by LGZ do not run in polynomial time for most inputs. However, the algorithms do run in polynomial time for calculating an approximation of the Betti number to polynomial multiplicative error, when applied to some class of graphs for which the Betti number is exponentially large.

William Kretschmer, Lower Bounding the AND-OR Tree via Symmetrization.

We prove a simple, nearly tight lower bound on the approximate degree of the two-level AND-OR tree using symmetrization arguments. Specifically, we show that ~deg(AND_m ∘ OR_n) = ~Ω(√(mn)). To our knowledge, this is the first proof of this fact that relies on symmetrization exclusively; most other proofs involve the more complicated formulation of approximate degree as a linear program [BT13, She13, BDBGK18]. Our proof also demonstrates the power of a symmetrization technique involving Laurent polynomials (polynomials with negative exponents) that was previously introduced by Aaronson, Kothari, Kretschmer, and Thaler [AKKT19].

Jiahui Liu and Ruizhe Zhang (project turned into a joint paper with me, Mark Zhandry, and Qipeng Liu),
New Approaches for Quantum Copy-Protection.

Quantum copy protection uses the unclonability of quantum states to construct quantum software that provably cannot be pirated. Copy protection would be immensely useful, but unfortunately little is known about how to achieve it in general. In this work, we make progress on this goal, by giving the following results:
– We show how to copy protect any program that cannot be learned from its input/output behavior, relative to a classical oracle. This improves on Aaronson [CCC’09], which achieves the same relative to a quantum oracle. By instantiating the oracle with post-quantum candidate obfuscation schemes, we obtain a heuristic construction of copy protection.
– We show, roughly, that any program which can be watermarked can be copy detected, a weaker version of copy protection that does not prevent copying, but guarantees that any copying can be detected. Our scheme relies on the security of the assumed watermarking, plus the assumed existence of public key quantum money. Our construction is general, applicable to many recent watermarking schemes.

John Kallaugher, Triangle Counting in the Quantum Streaming Model. Not yet available but coming soon to an arXiv near you!

We give a quantum algorithm for counting triangles in graph streams that uses less space than the best possible classical algorithm.