On Guilt

June 10th, 2021

The other night Dana and I watched “The Internet’s Own Boy,” the 2014 documentary about the life and work of Aaron Swartz, which I’d somehow missed when it came out. Swartz, for anyone who doesn’t remember, was the child prodigy who helped create RSS and Reddit, who then became a campaigner for an open Internet, who was arrested for using a laptop in an MIT supply closet to download millions of journal articles and threatened with decades in prison, and who then committed suicide at age 26. I regret that I never knew Swartz, though he did once send me a fan email about Quantum Computing Since Democritus.

Say whatever you want about the tactical wisdom or the legality of Swartz’s actions; it seems inarguable to me that he was morally correct, that certain categories of information (e.g. legal opinions and taxpayer-funded scientific papers) need to be made freely available, and that sooner or later our civilization will catch up to Swartz and regard his position as completely obvious. The beautifully-made documentary filled me with rage and guilt not only that the world had failed Swartz, but that I personally had failed him.

At the time of Swartz’s arrest, prosecution, and suicide, I was an MIT CS professor who’d previously written in strong support of open access to scientific literature, and who had the platform of this blog. Had I understood what was going on with Swartz—had I taken the time to find out what was going on—I could have been in a good position to help organize a grassroots campaign to pressure the MIT administration to urge prosecutors to drop the case (like JSTOR had already done), which could plausibly have made a difference. As it was, I was preoccupied in those years with BosonSampling, getting married, etc., I didn’t bother to learn whether anything was being done or could be done about the Aaron Swartz matter, and then before I knew it, Swartz had joined Alan Turing in computer science’s pantheon of lost geniuses.

But maybe there was something deeper to my inaction. If I’d strongly defended the substance of what Swartz had done, it would’ve raised the question: why wasn’t I doing the same? Why was I merely complaining about paywalled journals from the comfort of my professor’s office, rather than putting my own freedom on the line like Swartz was? It was as though I had to put some psychological distance between myself and the situation, in order to justify my life choices to myself.

Even though I see the error in that way of “thinking,” it keeps recurring, keeps causing me to make choices that I feel guilt or at least regret about later. In February 2020, there were a few smart people saying that a new viral pneumonia from Wuhan was about to upend life on earth, but the people around me certainly weren’t acting that way, and I wasn’t acting that way either … and so, “for the sake of internal consistency,” I didn’t spend much time thinking about it or investigating it. After all, if the fears of a global pandemic had a good chance of being true, I should be dropping everything else and panicking, shouldn’t I? But I wasn’t dropping everything else and panicking … so how could the fears be true?

Then I publicly repented, and resolved not to make such an error again. And now, 15 months later, I realize that I have made such an error again.

All throughout the pandemic, I’d ask my friends, privately, why the hypothesis that the virus had accidentally leaked from the Wuhan Institute of Virology wasn’t being taken far more seriously, given what seemed like a shockingly strong prima facie case. But I didn’t discuss the lab leak scenario on this blog, except once in passing. I could say I didn’t discuss it because I’m not a virologist and I had nothing new to contribute. But I worry that I also didn’t discuss it because it seemed incompatible with my self-conception as a cautious scientist who’s skeptical of lurid coverups and conspiracies—and because I’d already spent my “weirdness capital” on other issues, and didn’t relish the prospect of being sneered at on social media yet again. Instead I simply waited for discussion of the lab leak hypothesis to become “safe” and “respectable,” as today it finally has, thanks to writers who were more courageous than I was. I became, basically, another sheep in one of the conformist herds that we rightly despise when we read about them in history.

(For all that, it’s still plausible to me that the virus had a natural origin after all. What’s become clear is simply that, even if so, the failure to take the possibility of a lab escape more seriously back when the trail of evidence was fresher will stand as a major intellectual scandal of our time.)

Sometimes people are wracked with guilt, but over completely different things than the world wants them to be wracked with guilt over. This was one of the great lessons that I learned from reading Richard Rhodes’s The Making of the Atomic Bomb. Many of the Manhattan Project physicists felt lifelong guilt, not that they’d participated in building the bomb, but only that they hadn’t finished the bomb by 1943, when it could have ended the war in Europe and the Holocaust.

On a much smaller scale, I suppose some readers would still like me to feel guilt about comment 171, or some of the other stuff I wrote about nerds, dating, and feminism … or if not that, then maybe about my defense of a two-state solution for Israel and Palestine, or of standardized tests and accelerated math programs, or maybe my vehement condemnation of Trump and his failed insurrection. Or any of the dozens of other times when I stood up and said something I actually believed, or when I recounted my experiences as accurately as I could. The truth is, though, I don’t.

Looking back—which, now that I’m 40, I confess is an increasingly large fraction of my time—the pattern seems consistent. I feel guilty, not for having stood up for what I strongly believed in, but for having failed to do so. This suggests that, if I want fewer regrets, then I should click “Publish” on more potentially controversial posts! I don’t know how to force myself to do that, but maybe this post itself is a step.

More quantum computing popularization!

June 8th, 2021

I now have a feature article up at Quanta magazine, entitled “What Makes Quantum Computing So Hard To Explain?” I.e., why do journalists, investors, etc. so consistently get central points wrong, even after the subject has been in public consciousness for more than 25 years? Perhaps unsurprisingly, I found it hard to discuss that meta-level question, as Quanta's editors asked me to do, without also engaging in the object-level task of actually explaining QC. For regular Shtetl-Optimized readers, there will be nothing new here, but I’m happy with how the piece turned out.

Accompanying the Quanta piece is a 10-minute YouTube explainer on quantum computing, which (besides snazzy graphics) features interviews with me, John Preskill, and Dorit Aharonov.

On a different note, my colleague Mark Wilde has recorded a punk-rock song about BosonSampling. I can honestly report that it’s some of the finest boson-themed music I’ve heard in years. It includes the following lyrics:

Quantum computer, Ain’t no loser
Quantum computer, Quantum computer

People out on the streets
They don’t know what it is
They think it finds the cliques
Or finds graph colorings
But it don’t solve anything
Said it don’t solve anything
Bosonic slot machine
My lil’ photonic dream

Speaking of BosonSampling, A. S. Popova and A. N. Rubtsov, of the Skolkovo Institute in Moscow, have a new preprint entitled Cracking the Quantum Advantage threshold for Gaussian Boson Sampling. In it, they claim to give an efficient classical algorithm to simulate noisy GBS experiments, like the one six months ago from USTC in China. I’m still unsure how well this scales from 30-40 photons up to 50-70 photons; which imperfections of the USTC experiment are primarily being taken advantage of (photon losses?); and how this relates to the earlier proposed classical algorithms for simulating noisy BosonSampling, like the one by Kalai and Kindler. Anyone with any insight is welcome to share!

OK, one last announcement: the Simons Institute for the Theory of Computing, in Berkeley, has a new online lecture series called “Breakthroughs,” which many readers of this blog might want to check out.

Three updates

June 1st, 2021
  1. Hooray, I’m today’s “Featured ACM Member”! Which basically means, yet another interview with me about quantum computing, with questions including what’s most surprised me about the development of QC, and what students should do to get into the field.
  2. I’m proud to announce that An Automated Approach to the Collatz Conjecture, a paper by Emre Yolcu, myself, and Marijn Heule that we started working on over four years ago, is finally available on the arXiv, and will be presented at the 2021 Conference on Automated Deduction. Long story short: no, we didn’t prove Collatz, but we have an approach that can for the first time prove certain Collatz-like statements in a fully automated way, so hopefully that’s interesting! There was also a Quanta article even before our paper had come out (I wasn’t thrilled about the timing).
  3. The legendary Baba Brinkman has a new rap about quantum computing (hat tip to blog commenter YD). Having just watched the music video, I see it as one of the better popularization efforts our field has seen in the past 25 years—more coherent than the average journalistic account and with a much better backbeat. (I do, however, take a more guarded view than Brinkman of the potential applications, especially to e.g. autonomous driving and supply-chain optimization.)

In which I answer more quantum computing questions

May 24th, 2021

Yesterday, I had fun doing an open-ended Q&A at the Astral Codex Ten weekly online meetup. See here for the YouTube video. The questions were mainly about quantum computing, but ranged over various other topics as well, including education policy, the Great Stagnation, and what my biggest disagreements with Scott Alexander are.

In other news, last week I gave my talk about quantum supremacy at the (virtual, of course) Quantum Science Seminar, organized by Sebastian Blatt and others. See here for the YouTube video. Since I (alas) needed to leave after an hour sharp, the organizers asked if they could send me additional questions in writing and have me answer them. I said sure, as long as I could also post the Q&A on this blog! So without further ado…

Q: As you said, in computer science it’s difficult to prove that things are hard. From a computer science perspective, to what extent is “quantum supremacy” something absolute, and to what extent is it a shifting bar that depends on how good our classical hardware and algorithms are?

SCOTT: It’s kind of like the question “can computers beat the best humans at chess?” The latter question also involves a non-absolute shifting bar – maybe humans are getting better! Or maybe they simply haven’t figured out how to beat the computers yet! So how can we say “computer supremacy” has been decisively achieved? Nevertheless, one can clearly see a phase transition, with Deep Blue in 1996-1997, where the burden of proof shifted massively from one side to the other, and has stayed there ever since. Even if humans are getting better, the computers have also gotten better, and at such an insane rate that humans seem to have no chance of ever again catching up.

From a theoretical standpoint, that’s exactly what we might expect to happen with quantum supremacy experiments – simply because they involve tasks that have polynomial scaling for quantum computers, but (as far as we know) exponential scaling for classical computers, where each additional qubit roughly doubles the classical resources required. In analyzing concrete quantum supremacy experiments, I’d say that the goal is not to pin down the exact factor by which they’re beating this or that classical simulation (the answers will change rapidly anyway, and depend on all sorts of low-level details), but simply to figure out whether or not we’ve entered that phase transition.
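To make the "each qubit doubles the classical resources" point concrete, here's a toy calculation of my own (not from the Q&A): the memory a brute-force classical simulator would need just to store the full state vector of n qubits, at 16 bytes per complex amplitude.

```python
# Toy illustration: memory to store all 2^n complex amplitudes of an
# n-qubit state, at 16 bytes (one double-precision complex) each.
# Each added qubit doubles the requirement -- the exponential wall
# that brute-force classical simulation runs into.

def statevector_bytes(n_qubits: int) -> int:
    """Bytes needed to hold all 2^n complex amplitudes."""
    return (2 ** n_qubits) * 16

for n in (30, 40, 50, 60):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")
```

Around 30 qubits this fits on a laptop (16 GiB); by 50-60 qubits it exceeds any machine on earth, which is why the supremacy experiments sit in exactly that range.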

Q: What do you generally think about improvements in tensor network methods as a challenge to quantum supremacy and the recent simulation of the supremacy result in Beijing?

SCOTT: I blogged about this here. It was a clever paper, which showed that if you focus narrowly on spoofing the linear cross-entropy benchmark used by Google, then there’s a classical algorithm to do that much faster than had been previously pointed out, by generating many samples that all share most of their bits in common (e.g., that all agree on the first 30 of the 53 bits). But many people remain unaware that, if you just changed the benchmark – for example, if you insisted that the returned samples not only pass the linear cross-entropy benchmark, but also be sufficiently different – then this classical spoofing strategy wouldn’t work and we’d be back to the previous status quo.
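As a hypothetical sketch of the "sufficiently different" requirement mentioned above, one could demand that every pair of returned samples differ in at least some minimum number of bits, which the spoofing strategy's highly correlated samples would fail. The function names and thresholds here are illustrative, not part of any actual benchmark.

```python
# Illustrative diversity check (not an actual benchmark): reject a batch
# of bitstring samples if any pair agrees on almost all bits, as the
# spoofed samples described above do.

from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Number of bit positions where the two strings differ."""
    return sum(x != y for x, y in zip(a, b))

def diverse_enough(samples, min_dist: int) -> bool:
    """True iff every pair of samples differs in at least min_dist bits."""
    return all(hamming(a, b) >= min_dist for a, b in combinations(samples, 2))

# Spoof-like batch: all samples share their first four bits.
spoofed = ["110100", "110111", "110101"]
honest  = ["010110", "101001", "111010"]

print(diverse_enough(spoofed, 3))  # False
print(diverse_enough(honest, 3))   # True
```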

Q: Why do you need a random circuit to test the device, and also why is this more interesting/useful than doing something very specific many times to test the device?

SCOTT: The reasons to use random quantum circuits are simply that
(1) they generate complicated entangled states on all the qubits nearly as rapidly as it’s possible to do so – indeed, we now have theoretical results that give a detailed understanding of this process, and
(2) random circuits seem to have about as little “usable structure” (which a classical simulation might exploit) as it’s possible to have.
Eventually, of course, we’d like to run actually useful circuits (say, those that arise in Shor’s factoring algorithm), which will typically have regular patterns and be extremely far from random! But then more qubits will be needed to get an advantage over classical computers. It’s not terribly surprising for the first advantage over classical to be via random quantum circuits, which in some sense “maximally exploit” the hardware resources available.

Q: Have you heard of/what do you think about the recent NP-verification experiment using a quantum optical setup from Paris?

SCOTT: That experiment was actually based on a protocol that Beigi, Fefferman, Drucker, Shor, and I proposed back in 2008. It’s crucial for people to understand that there’s no claimed speedup here for solving any NP-complete problem. We’re talking about a more arcane task: namely, proving to someone that an NP-complete problem has a solution by sending them a small number of qubits. Even there, the protocol depends on the ability to send two quantum states that are guaranteed not to be entangled with each other; also, the communication savings is “only” polynomial rather than exponential (in our original protocol, roughly √n qubits where n classical bits would have been needed). Nevertheless, it’s always fun to see a real experiment implementing some version of something that you worked out on paper, even if you already knew it would work!

Q: If the difficulty for classical simulation is related to the Hilbert space dimension, has it been formally proven that an ideal analog classical computer cannot outperform a quantum computer?

SCOTT: This is not a question of “formal proof” but of physics. In my view, we already know deep reasons why, in our universe, analog classical computers are unlikely to be able to do anything that can’t be efficiently simulated using a standard digital computer. (In other words, why analog classical computers don’t violate the “Extended Church-Turing Thesis.”) Those reasons have to do with nonlinear dynamics chaotically amplifying even the tiniest errors in an analog device. Or, if you really want to push this discussion to the bitter end, they have to do with the breakdown in our picture of a smooth spacetime that’s expected to occur at the Planck scale, of ~10^-33 centimeters and ~10^-43 seconds, for reasons of black hole thermodynamics and quantum gravity. Crucially, neither of these issues applies to quantum computation. The former doesn’t apply because of quantum error correction and fault-tolerance, which have the effect of “discretizing” continuous errors; while the latter doesn’t apply because as far as anyone knows today, quantum mechanics (unlike theories that assume a smooth spacetime) is exactly true.

Q: [In the comparison between quantum and classical computation], what do we mean by “classical resources” here? Do we mean something parochial like “resources obeying non-quantum laws that are available to us humans on Earth?” Or is any classical resource fair game, no matter its size and classical equations of motion? In that case, doesn’t demonstrating quantum supremacy require demonstrating that the quantum computer exceeds the capabilities of, say, a classical computer the size of the solar system that exploits some CTC (closed timelike curve)?

SCOTT: If CTCs were possible, that would be a revolution in physics even greater than quantum mechanics! Leaving CTCs aside, though, cosmology and quantum gravity seem to impose a limit of roughly 10^122 on the number of bits (and the number of operations on the bits) that any computer that fit inside the observable universe could possibly have. And if you envision that cosmological computer as a classical one, then it shouldn’t take impossibly long for us to build quantum computers that can outperform it on some tasks: indeed, a device with ~400 qubits should already be enough! But of course we’re not there yet: with 50-60 qubits, QCs right now are “merely” challenging the largest classical computers currently available on earth (again, on contrived sampling tasks), rather than the largest that could fit in the observable universe.
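A quick back-of-envelope check of the "~400 qubits" figure, which is my own arithmetic rather than part of the answer: how many qubits give a Hilbert space dimension exceeding the ~10^122 cosmological bound?

```python
# How many qubits does a Hilbert space of dimension > 10^122 require?
# We need 2^n > 10^122, i.e. n > 122 * log2(10).

import math

bits_bound = 122 * math.log2(10)   # log2 of 10^122
print(math.ceil(bits_bound))       # 406
```

So about 406 qubits suffice, consistent with the rough "~400 qubits" quoted above.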

Q: Why is a quantum state (superposition or entangled state) inherently more fragile than a classical state?

SCOTT: Because when information from the state (say, whether a qubit is 0 or 1, or which path a photon takes through a beamsplitter network) “leaks out” into the environment, the information effectively becomes entangled with the environment, which damages the state in a measurable way. Indeed, it now appears to an observer as a mixed state rather than a pure state, so that interference between the different components can no longer happen. This is a fundamental, justly-famous feature of quantum information that’s not shared by classical information.
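Here's a minimal numerical illustration of that answer (mine, not from the Q&A): compare the density matrix of the pure superposition state |+⟩ with its fully dephased counterpart after the environment has learned the bit. The off-diagonal "coherence" terms are exactly what interference needs.

```python
# Density matrix of |+> = (|0> + |1>)/sqrt(2): rho = |+><+|.
pure = [[0.5, 0.5],
        [0.5, 0.5]]

# Once which-bit information leaks to the environment, the off-diagonal
# terms vanish, leaving a classical 50/50 mixture of |0> and |1>.
mixed = [[0.5, 0.0],
         [0.0, 0.5]]

def interference_visibility(rho):
    """Magnitude of the off-diagonal term: 0.5 = full interference, 0 = none."""
    return abs(rho[0][1])

print(interference_visibility(pure))   # 0.5
print(interference_visibility(mixed))  # 0.0
```

Both matrices have the same diagonal, so measuring in the 0/1 basis can't tell them apart; only interference experiments (which probe the off-diagonals) reveal the damage.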

Q: Given that we have noisy intermediate-scale quantum devices, what do you see as the fundamental role of noise in the discussion of quantum advantage or quantum supremacy? Is it simply that less noise is better, or are there things that cannot be done if there is noise?

SCOTT: There will always be some noise. The big question, about any given platform, is whether the noise is low enough that you can start usefully error-correcting it away, or whether the noise is so high that there’s no point (i.e., whether you’re above or below the “fault-tolerance threshold”). Until you start doing error-correction, most experts believe there’s a severe limit to how far you can scale: probably to quantum computations involving a few hundred qubits at most. Whereas once you have error-correction, at least in principle the sky’s the limit.

Q: You used the Wright brothers as an example where an airplane that was not practically useful itself pioneered the path to useful flight. In what sense do the sampling experiments of Google and USTC also pioneer that path for what we need for the future of quantum computing, or to what extent do you see them as niche examples of quantum supremacy?

SCOTT: I think the majority of what Google did, in integrating 53 superconducting qubits, making them programmable, etc. – and especially in characterizing the noise in a system at that scale – will be directly useful going forward. Indeed, that’s a large part of why they did it! Likewise, a lot of what USTC did, in integrating hundreds of beamsplitters, photon sources, and photodetectors, could be directly relevant to building a universal optical quantum computer. On the other hand, it’s true that both groups took shortcuts with the immediate goal of quantum supremacy in mind. As an example, Google’s chip uses a particular 2-qubit gate that was chosen, not because it shows up naturally in any application, but simply because it’s extra-hard to simulate using tensor network contraction algorithms, so it let them get to quantum supremacy faster than if they’d used a more conventional 2-qubit gate like the CNOT.

Q: To what extent does the measured circuit fidelity of 0.2% in the Google experiment limit the usability of this system for other computations?

SCOTT: Oh, we don’t know of anything particularly useful to do with Google’s Sycamore chip – that is, anything that you couldn’t do much more easily without it – other than
(1) quantum supremacy demonstrations,
(2) possibly the generation of cryptographically certified random bits, and
(3) of course, calibration experiments that tell you about the behavior of integrated superconducting qubits and thereby help you iterate to the next device.
But things are developing rapidly – the best circuit fidelity that was achievable in 2019 is not necessarily the best now, or the best that will be achieved in another year or two.

Update (May 25): Please, no new questions; just discussion of the existing questions and answers! I had hoped to get some work done today; I hadn’t planned on another ask-me-anything session. Thanks!

On turning 40 today

May 21st, 2021

Holy crap.

In case you’re wondering how I spent such a milestone of a day: well, I spent hours of it at an important virtual grant review meeting with the Department of Defense. Alas, when it came time for my own big presentation at that meeting—about what my students and I had done over the past five years to lay the theoretical foundations for the recent achievement of quantum computational supremacy—I’d uploaded the completely wrong PowerPoint file (something.pptx rather than something.ppt; and no, the two weren’t just different versions of the same presentation). Sorting this out took about 10 minutes, destroyed my momentum, and wasted everyone’s time. I partly blame the Microsoft Teams platform, whose limitations as conferencing software compared to Zoom necessitated emailing my presentation in the first place. But of course, part of the blame rests with me.

I had to explain apologetically to the US Department of Defense that I’m no good with tech stuff—being a mere computer science PhD. And unlike many of my colleagues (who I envy), back in my youth—for at age 40 I’m no longer young—I never had enough time to become both the kind of person who might earn a big grant to do quantum computing theory, and the kind of person who’d be minimally competent at the logistics of a review meeting for such a grant.


Forty years. Seven-eighths of those years, aware of the finiteness of the speed of light and of its value. Four-fifths of them, aware of the grislier details of the Holocaust. Three-quarters of them, aware of what it means to write code. Two-thirds of them, aware of polynomial versus exponential time. More than half of them trying to understand the capabilities and limitations of quantum computers as my day job. And then, rounding the corner, more than a third of the years writing this blog, a third of them being a professor, a quarter of them married, a fifth of them raising kids, a thirtieth of them in the midst of a global pandemic.

I didn’t even come close to achieving everything I hoped I would in my thirties. At least a half-dozen major papers, ones I expected would’ve been finished years ago (on the mixing of coffee and cream, on complexity and firewalls and AdS/CFT, on certified random numbers from sampling-based quantum supremacy experiments, on the implications of the Raz-Tal oracle separation, …), still need to be revised or even written. Other projects (e.g., the graphic novel about teaching math to Lily) were excitedly announced and then barely even started. I never wrote most of my promised blog post about the continuum hypothesis, or the one about Stephen Wolfram’s recrudescent claims of a unified theory of physics. And covid, which determined the world’s working conditions while we were running out the clock, turned out not to be a hyper-productive time for me. That’s how you know I’m not Newton (well, it’s not the only way you know).

Anyway, during the runup to it, one’s 40th birthday feels like a temporal singularity, where you have to compress more and more of what you’d hoped to achieve before age 40 as you get closer and closer to it, because what the hell is there on the other side? They’re over-40 and hence “old”; you’re under-40 and hence still “young.”

OK, but here I am on the other side right now, the “old” side, and I’m still here, still thinking and writing and feeling fairly continuous with my pre-singularity embodiment! And so far, in 16 hours on this side, the most senile thing I’ve done has been to email the wrong file attachment and thereby ruin an important funding presenta… you know what, let’s not even go there.

If you feel compelled to give me a 40th birthday present, then just make it a comment on this post, as short or long as you like, about what anything I said or did meant for you. I’m a total softie for that stuff.

What I told my kids

May 15th, 2021

You’ll hear that it’s not as simple as the Israelis are good guys and Palestinians are bad guys, or vice versa. And that’s true.

But it’s also not so complicated that there are no clearly identifiable good guys or bad guys. It’s just that they cut across the sides.

The good guys are anyone, on either side, whose ideal end state is two countries, Israel and Palestine, living side by side in peace.

The bad guys are anyone, on either side, whose ideal end state is the other side being, if not outright exterminated, then expelled from its current main population centers (ones where it’s been for several generations or more) and forcibly resettled someplace far away.

(And those whose ideal end state is everyone living together with no border — possibly as part of the general abolition of nation-states? They’re not bad guys; they can plead insanity. [Update: See here for clarifications!])

Hamas are bad guys. They fire rockets indiscriminately at population centers, hoping to kill as many civilians as they can. (Unfortunately for them and fortunately for Israel, they’re not great at that, and also they’re aiming at a target that’s world-historically good at defending itself.)

The IDF, whatever else you say about it, sends evacuation warnings to civilians before it strikes the missile centers that are embedded where they live. Even if Hamas could aim its missiles, the idea of it extending the same courtesy to Israeli civilians is black comedy.

Netanyahu is not as bad as Hamas, because he has the power to kill millions of Palestinians and yet kills only hundreds … whereas if Hamas had the power to kill all Jews, it told the world in its charter that it would immediately do so, and it’s acted consistently with its word.

(An aside: I’m convinced that Hamas has the most top-heavy management structure of any organization in the world. Every day, Israel takes out another dozen of its most senior, highest-level commanders, apparently leaving hundreds more. How many senior commanders do they have? Do they have even a single junior commander?)

Anyway, not being as bad as Hamas is an extremely low bar, and Netanyahu is a thoroughly bad guy. He’s corrupt and power-mad. Like Trump, he winks at his side’s monstrous extremists without taking moral responsibility for them. And if it were ever possible to believe that he wanted two countries as the ideal end state, it hasn’t been possible to believe that for at least a decade.

Netanyahu and Hamas are allies, not enemies. Both now blatantly, obviously rely on the other to stay in power, to demonstrate their worldview and thereby beat their internal adversaries.

Whenever you see anyone opine about this conflict, on Facebook or Twitter or in an op-ed or anywhere else, keep your focus relentlessly on the question of what that person wants, of what they’d do if they had unlimited power. If they’re a Zionist who talks about how “there’s no such place as Palestine,” how it’s a newly invented political construct: OK then, does that mean they’d relocate the 5 million self-described Palestinians to Jordan? Or where? If, on the other side, someone keeps talking about the “Zionist occupation,” always leaving it strategically unspecified whether they mean just the West Bank and parts of East Jerusalem or also Tel Aviv and Haifa, if they talk about the Nakba (catastrophe) of Israel’s creation in 1948 … OK then, what’s to be done with the 7 million Jews now living there? Should they go back to the European countries that murdered their families, or the Arab countries that expelled them? Should the US take them all? Out with it!

Don’t let them dodge the question. Don’t let them change the subject to something they’d much rather talk about, like the details of the other side’s latest outrage. Those details always seem so important, and yet everyone’s stance on every specific outrage is like 80% predictable if you know their desired end state. So just keep asking directly about their desired end state.

If, like me, you favor two countries living in peace, then you need never fear anyone asking you the same thing. You can then shout your desired end state from the rooftops, leaving unsettled only the admittedly-difficult “engineering problem” of how to get there. Crucially, whatever their disagreements or rivalries, everyone trying to solve the same engineering problem is in a certain sense part of the same team. At least, there’s rarely any reason to kill someone trying to solve the same problem that you are.

“What is this person’s ideal end state?” Just keep asking that and there’s a limit to how wrong you can ever be about this. You can still make factual mistakes, but it’s then almost impossible to make a moral mistake.

Three updates

May 10th, 2021
  1. For those who read my reply to Richard Borcherds on “teapot supremacy”: seeking better data, I ordered a dozen terra cotta flowerpots, and smashed eight of them on my driveway with my 4-year-old son, dropping each one from approximately 2 meters. For each flowerpot, we counted how many pieces it broke into, seeking insight about the distribution over that number. Unfortunately, it still proved nearly impossible to get good data, for a reason commenters had already warned me about: namely, there were typically 5-10 largeish shards, followed by a “long tail” of smaller and smaller shards (eventually, just terra cotta specks), with no obvious place to draw the line and stop counting. Nevertheless, when I attempted to count only the shards that were “fingernail-length or larger,” here’s what I got: 1 pot with 9 shards, 1 with 11 shards, 2 with 13 shards, 2 with 15 shards, 1 with 17 shards, 1 with 19 shards. This is a beautiful (too beautiful?) symmetric distribution centered around a mean of 14 shards, although it’s anyone’s guess whether it approximates a Gaussian or something else. I have no idea why every pot broke into an odd number of shards, unless of course it was a 1-in-256-level fluke, or some cognitive bias that made me preferentially stop counting the shards at odd numbers.
  2. Thanks so much to everyone who congratulated me for the ACM Prize, and especially those who (per my request) suggested charities to which to give bits of the proceeds! Tonight, after going through the complete list of suggestions, I made my first, but far from last, round of donations: $1000 each to the Deworm the World Initiative, GiveDirectly, the World Wildlife Fund, the Nature Conservancy, and Canada/USA Mathcamp (which had a huge impact on me when I attended it as a 15-year-old). One constraint, which might never arise in a decade of moral philosophy seminars, ended up being especially important in practice: if the donation form was confusing or buggy, or if it wouldn’t accept my donation without some onerous confirmation step involving a no-longer-in-use cellphone, then I simply moved on to the next charity.
  3. Bobby Kleinberg asked me to advertise the call for nominations for the brand-new STOC Test of Time Award. The nomination deadline is coming up soon: May 24.
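The shard statistics in item 1 can be checked in a few lines. This is just a sanity check of the arithmetic stated above (the mean of 14, and the 1-in-256 chance of all-odd counts if odd and even were equally likely), using the eight counts as reported:

```python
from fractions import Fraction
from statistics import mean

# Shard counts from the eight smashed flowerpots, as reported in item 1:
# one pot with 9, one with 11, two with 13, two with 15, one with 17, one with 19.
shards = [9, 11, 13, 13, 15, 15, 17, 19]

# The distribution is symmetric around its mean.
print(mean(shards))  # 14

# If odd and even shard counts were equally likely, the chance that all
# eight pots land on odd counts would be (1/2)^8 = 1/256.
fluke = Fraction(1, 2) ** len(shards)
print(fluke)  # 1/256
```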

The easiest exercise in the moral philosophy book

April 25th, 2021

Peter Singer, in the parable that came to represent his whole worldview and that of the effective altruism movement more generally, asked us to imagine that we could save a drowning child at the cost of jumping into a lake and ruining an expensive new suit. Assuming we’d do that, he argued that we do in fact face an ethically equivalent choice; if we don’t donate most of our income to save children in the Third World, then we need to answer for why, as surely as the person who walked past the kid thrashing in the water.

In this post, I don’t want to take a position on Singer’s difficult but important hypothetical. I merely want to say: suppose that to save the child, you didn’t even have to jump in the water. Suppose you just had to toss a life preserver, one you weren’t using. Or suppose you just had to assure the child that it was OK to grab your life raft that was already in the water.

That, it seems, is the situation that the US and other rich countries will increasingly face with covid vaccines. What’s happening in India right now looks on track to become a humanitarian tragedy, if it isn’t already. Even if, as Indian friends tell me, this was a staggering failure of the Modi government, people shouldn’t pay for it with their lives. And we in the US now have tens of millions of vaccine doses sitting in warehouses unused, for regulatory and vaccine hesitancy reasons—stupidly, but we do. We’re past the time, in my opinion, when it’s morally obligatory either to use the doses or to give them away. Anyone in a position to manufacture more vaccines for distribution to poor countries, should also immediately get the intellectual property rights to do so.

I was glad to read, just this weekend, that the US is finally starting to move in the right direction. I hope it moves faster.

And I’m sorry that this brief post doesn’t contain any information or insight that you can’t find elsewhere. It just made me feel better to write it, is all.

Doubts about teapot supremacy: my reply to Richard Borcherds

April 20th, 2021

Richard Borcherds is a British mathematician at Berkeley, who won the 1998 Fields Medal for the proof of the monstrous moonshine conjecture, among many other contributions. A couple months ago, Borcherds posted on YouTube a self-described “rant” about quantum computing, which was recently making the rounds on Facebook and which I found highly entertaining.

Borcherds points out that the term “quantum supremacy” means only that quantum computers can outperform existing classical computers on some benchmark, which can be chosen to show maximum advantage for the quantum computer. He allows that BosonSampling could have some value, for example in calibrating quantum computers or in comparing one quantum computer to another, but he decries the popular conflation of quantum supremacy with the actual construction of a scalable quantum computer able (for example) to run Shor’s algorithm to break RSA.

Borcherds also proposes a “teapot test,” according to which any claim about quantum computers can be dismissed if an analogous claim would hold for a teapot (which he brandishes for the camera). For example, there are many claims to solve practical optimization and machine learning problems by “quantum/classical hybrid algorithms,” wherein a classical computer does most of the work but a quantum computer is somehow involved. Borcherds points out that, at least as things stand in early 2021, in most or all such cases, the classical computer could’ve probably done as well entirely on its own. So then if you put a teapot on top of your classical computer while it ran, you could equally say you used a “classical/teapot hybrid approach.”

Needless to say, Borcherds is correct about all of this. I’ve made similar points on this blog for 15 years, although less Britishly. I’m delighted to have such serious new firepower on the scoffing-at-QC-hype team.

I do, however, have one substantive disagreement. At one point, Borcherds argues that sampling-based quantum supremacy itself fails his teapot test. For consider the computational problem of predicting how many pieces a teapot will break into if it’s dropped on the ground. Clearly, he says, the teapot itself will outperform any simulation running on any existing classical computer at that task, and will therefore achieve “teapot supremacy.” But who cares??

I’m glad that Borcherds has set out, rather crisply, an objection that’s been put to me many times over the past decade. The response is simple: I don’t believe the teapot really does achieve teapot supremacy on the stated task! At the least, I’d need to be shown why. You can’t just assert it without serious argument.

If we want to mirror the existing quantum supremacy experiments, then the teapot computational problem, properly formulated, should be: given as input a description of a teapot’s construction, the height from which it’s dropped, etc., output a sample from the probability distribution over the number of shards that the teapot will break into when it hits the floor.

If so, though, then clearly a classical computer can easily sample from the same distribution! Why? Because presumably we agree that there’s a negligible probability of more than (say) 1000 shards. So the distribution is characterized by a list of at most 1000 probabilities, which can be estimated empirically (at the cost of a small warehouse of smashed teapots) and thereafter used to generate samples. In the plausible event that the distribution is (say) a Gaussian, it’s even easier: just estimate the mean and variance.
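The classical strategy just described (estimate the at-most-1000 probabilities empirically, then sample from them) can be sketched in a few lines. The shard counts below are hypothetical stand-ins for the data one would get from the warehouse of smashed teapots:

```python
import random
from collections import Counter

# Hypothetical shard counts observed over repeated teapot-smashing trials;
# in practice these would come from the "small warehouse" of smashed teapots.
observed = [5, 7, 6, 5, 8, 6, 7, 6, 5, 6]

# Estimate the distribution empirically: p(k) = (times k was seen) / (trials).
counts = Counter(observed)
outcomes = sorted(counts)
probs = [counts[k] / len(observed) for k in outcomes]

# A classical computer can now sample from (an estimate of) the same
# distribution the teapot samples from, with no teapot required.
sample = random.choices(outcomes, weights=probs, k=1)[0]
print(sample)
```

In the Gaussian case mentioned above, one would instead fit just the mean and variance and sample from the fitted normal distribution.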

A couple days ago, I was curious what the distribution looked like, so I decided to order some teapots from Amazon and check. Unfortunately, real porcelain teapots are expensive, and it seemed vaguely horrific to order dozens (as would be needed to get reasonable data) for the sole purpose of smashing them on my driveway. So I hit on what seemed like a perfect solution: I ordered toy teapots, which were much smaller and cheaper. Alas, when my toy “porcelain” teapots arrived yesterday, they turned out (unsurprisingly in retrospect for a children’s toy) to be some sort of plastic or composite material, meaning that they didn’t break unless one propelled them downward forcefully. So, while I can report that they tended to break into one or two large pieces along with two or three smaller shards, I found it impossible to get better data. (There’s a reason why I became a theoretical computer scientist…)

The good news is that my 4-year-old son had an absolute blast smashing toy teapots with me on our driveway, while my 8-year-old daughter was thrilled to take the remaining, unbroken teapots for her dollhouse. I apologize if this fails to defy gender stereotypes.

Anyway, it might be retorted that it’s not good enough to sample from a probability distribution: what’s wanted, rather, is to calculate how many pieces this specific teapot will break into, given all the microscopic details of it and its environment. Aha, this brings us to a crucial conceptual point: in order for something to count as an “input” to a computer, you need to be able to set it freely. Certainly, at the least, you need to be able to measure and record the input in its entirety, so that someone trying to reproduce your computation on a standard silicon computer would know exactly which computation to do. You don’t get to claim computational supremacy based on a problem with secret inputs: that’s like failing someone on a math test without having fully told them the problems.

Ability to set and know the inputs is the key property that’s satisfied by Google’s quantum supremacy experiment, and to a lesser extent by the USTC BosonSampling experiment, but that’s not satisfied at all by the “smash a teapot on the floor” experiment. Or perhaps it’s better to say: influences on a computation that vary uncontrollably and chaotically, like gusts of air hitting the teapot as it falls to the floor, shouldn’t be called “inputs” at all; they’re simply noise sources. And what one does with noise sources is to try to estimate their distribution and average over them—but in that case, as I said, there’s no teapot supremacy.

A Facebook friend said to me: that’s well and good, but surely we could change Borcherds’s teapot experiment to address this worry? For example: add a computer-controlled lathe (or even a 3D printer), with which you can build a teapot in an arbitrary shape of your choice. Then consider the problem of sampling from the probability distribution over how many pieces that teapot will smash into, when it’s dropped from some standard height onto some standard surface. I replied that this is indeed more interesting—in fact, it already seems more like what engineers do in practice (still, sometimes!) when building wind tunnels, than like a silly reductio ad absurdum of quantum supremacy experiments. On the other hand, if you believe the Extended Church-Turing Thesis, then as long as your analog computer is governed by classical physics, it’s presumably inherently limited to an Avogadro’s number type speedup over a standard digital computer, whereas with a quantum computer, you’re limited only by the exponential dimensionality of Hilbert space, which seems more interesting.
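To put a rough number on that comparison (my back-of-envelope, under the stated assumption that a classical analog device's advantage caps out around an Avogadro's-number speedup): the 2^n dimensionality of n-qubit Hilbert space already overtakes Avogadro's number at n = 79 qubits:

```python
import math

AVOGADRO = 6.022e23  # approximate number of particles per mole

# The Hilbert-space dimension of an n-qubit quantum computer is 2^n.
# Find the smallest n at which 2^n exceeds Avogadro's number.
n = math.ceil(math.log2(AVOGADRO))
print(n)                     # 79
print(2**n > AVOGADRO)       # True: 2^79 ~ 6.04e23
print(2**(n - 1) > AVOGADRO) # False: 2^78 ~ 3.02e23
```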

Or maybe I’m wrong—in which case, I look forward to the first practical demonstration of teapot supremacy! Just like with quantum supremacy, though, it’s not enough to assert it; you need to … put the tea where your mouth is.

Update: On the suggestion of Ernest Davis, who I can now reveal as the Facebook friend mentioned above, I just ordered some terra cotta flower pots, which look cheap, easily smashable, and environmentally friendly, and which will hopefully be acceptable substitutes for porcelain teapots in a new experiment. (Not that my main arguments in this post hinge on the results of such an experiment! That’s the power of theory.)

Another Update: Some of you might enjoy John Horgan’s Scientific American column on reality vs. hype in quantum computing, based on conversations with me and with Terry Rudolph of PsiQuantum.

The ACM Prize thing

April 14th, 2021

Last week I got an email from Dina Katabi, my former MIT colleague, asking me to call her urgently. Am I in trouble? For what, though?? I haven’t even worked at MIT for five years!

Luckily, Dina only wanted to tell me that I’d been selected to receive the 2020 ACM Prize in Computing, a mid-career award founded in 2007 that comes with $250,000 from Infosys. Not the Turing Award, but I’d happily take it! And I could even look back on 2020 fondly for something.

I was utterly humbled to see the list of past ACM Prize recipients, which includes amazing computer scientists I’ve been privileged to know and learn from (like Jon Kleinberg, Sanjeev Arora, and Dan Boneh) and others who I’ve admired from afar (like Daphne Koller, Jeff Dean and Sanjay Ghemawat of Google MapReduce, and David Silver of AlphaGo and AlphaZero).

I was even more humbled, later, to read my prize citation, which focuses on four things:

  1. The theoretical foundations of the sampling-based quantum supremacy experiments now being carried out (and in particular, my and Alex Arkhipov’s 2011 paper on BosonSampling);
  2. My and Avi Wigderson’s 2008 paper on the algebrization barrier in complexity theory;
  3. Work on the limitations of quantum computers (in particular, the 2002 quantum lower bound for the collision problem); and
  4. Public outreach about quantum computing, including through QCSD, popular talks and articles, and this blog.

I don’t know if I’m worthy of such a prize—but I know that if I am, then it’s mainly for work I did between roughly 2001 and 2012. This honor inspires me to want to be more like I was back then, when I was driven, non-jaded, and obsessed with figuring out the contours of BQP and efficient computation in the physical universe. It makes me want to justify the ACM’s faith in me.

I’m grateful to the committee and nominators, and more broadly, to the whole quantum computing and theoretical computer science communities—which I “joined” in some sense around age 16, and which were the first communities where I ever felt like I belonged. I’m grateful to the mentors who made me what I am, especially Chris Lynch, Bart Selman, Lov Grover, Umesh Vazirani, Avi Wigderson, and (if he’ll allow me to include him) John Preskill. I’m grateful to the slightly older quantum computer scientists who I looked up to and tried to emulate, like Dorit Aharonov, Andris Ambainis, Ronald de Wolf, and John Watrous. I’m grateful to my wonderful colleagues at UT Austin, in the CS department and beyond. I’m grateful to my students and postdocs, the pride of my professional life. I’m grateful, of course, to my wife, parents, and kids.

By coincidence, my last post was also about prizes to theoretical computer scientists—in that case, two prizes that attracted controversy because of the recipient’s (or would-be recipient’s) political actions or views. It would understate matters to point out that not everyone has always agreed with everything I’ve said on this blog. I’m ridiculously lucky, and I know it, that even living through this polarized and tumultuous era, I never felt forced to choose between academic success and the freedom to speak my conscience in public under my real name. If there’s been one constant in my public stands, I’d like to think that—inspired by memories of my own years as an unknown, awkward, self-conscious teenager—it’s been my determination to nurture and protect talented young scientists, whatever they look like and wherever they come from. And I’ve tried to live up to that ideal in real life, and I welcome anyone’s scrutiny as to how well I’ve done.

What should I do with the prize money? I confess that my first instinct was to donate it, in its entirety, to some suitable charity—specifically, something that would make all the strangers who’ve attacked me on Twitter, Reddit, and so forth over the years realize that I’m fundamentally a good person. But I was talked out of this plan by my family, who pointed out that
(1) in all likelihood, nothing will make online strangers stop hating me,
(2) in any case this seems like a poor basis for making decisions, and
(3) if I really want to give others a say in what to do with the winnings, then why not everyone who’s stood by me and supported me?

So, beloved commenters! Please mention your favorite charitable causes below, especially weird ones that I wouldn’t have heard of otherwise. If I support their values, I’ll make a small donation from my prize winnings. Or a larger donation, especially if you donate yourself and challenge me to match. Whatever’s left after I get tired of donating will probably go to my kids’ college fund.

Update: And by an amusing coincidence, today is apparently “World Quantum Day”! I hope your Quantum Day is as pleasant as mine (and stable and coherent).