Yet more mistakes in papers

August 10th, 2021

Amazing Update (Aug. 19): My former PhD student Daniel Grier tells me that he, Sergey Bravyi, and David Gosset have an arXiv preprint, from February, where they give a corrected proof of my and Andris Ambainis’s claim that any k-query quantum algorithm can be simulated by an O(N^(1-1/2k))-query classical randomized algorithm (albeit, not of our stronger statement, about a randomized algorithm to estimate any bounded low-degree real polynomial). The reason I hadn’t known about this is that they don’t mention it in the abstract of their paper (!!). But it’s right there in Theorem 5.


In my last post, I came down pretty hard on the blankfaces: people who relish their power to persist in easily-correctable errors, to the detriment of those subject to their authority. The sad truth, though, is that I don’t obviously do better than your average blankface in my ability to resist falsehoods on early encounter with them. As one of many examples that readers of this blog might know, I didn’t think covid seemed like a big deal in early February 2020—although by mid-to-late February 2020, I’d repented of my doofosity. If I have any tool with which to unblank my face, then it’s only my extreme self-consciousness when confronted with evidence of my own stupidities—the way I’ve trained myself over decades in science to see error-correction as a, or even the, fundamental virtue.

Which brings me to today’s post. Continuing what’s become a Shtetl-Optimized tradition—see here from 2014, here from 2016, here from 2017—I’m going to fess up to two serious mistakes in research papers on which I was a coauthor.


In 2015, Andris Ambainis and I had a STOC paper entitled Forrelation: A Problem that Optimally Separates Quantum from Classical Computing. We gave two main results there:

  1. An Ω(√N/log N) lower bound on the randomized query complexity of my “Forrelation” problem, which was known to be solvable with only a single quantum query.
  2. A proposed way to take any k-query quantum algorithm that queries an N-bit string, and simulate it using only O(N^(1-1/2k)) classical randomized queries.

Later, Bansal and Sinha and independently Sherstov, Storozhenko, and Wu showed that a k-query generalization of Forrelation, which I’d also defined, requires ~Ω(N^(1-1/2k)) classical randomized queries, in line with my and Andris’s conjecture that k-fold Forrelation optimally separates quantum and classical query complexities.

A couple months ago, alas, my former grad school officemate Andrej Bogdanov, along with Tsun Ming Cheung and Krishnamoorthy Dinesh, emailed me and Andris to say that they’d discovered an error in result 2 of our paper (result 1, along with the Bansal-Sinha and Sherstov-Storozhenko-Wu extensions of it, remained fine). So, adding our own names, we’ve now posted a preprint on ECCC that explains the error, while also showing how to recover our result for the special case k=1: that is, any 1-query quantum algorithm really can be simulated using only O(√N) classical randomized queries.

Read the preprint if you really want to know the details of the error, but to summarize it in my words: Andris and I used a trick that we called “variable-splitting” to handle variables that have way more influence than average on the algorithm’s acceptance probability. Alas, variable-splitting fails to take care of a situation where there are a bunch of variables that are non-influential individually, but that on some unusual input string, can “conspire” in such a way that their signs all line up and their contribution overwhelms those from the other variables. A single mistaken inequality fooled us into thinking such cases were handled, but an explicit counterexample makes the issue obvious.

I still conjecture that my original guess was right: that is, I conjecture that any problem solvable with k quantum queries is solvable with O(N^(1-1/2k)) classical randomized queries, so that k-fold Forrelation is the extremal example, and so that no problem has constant quantum query complexity but linear randomized query complexity. More strongly, I reiterate the conjecture that any bounded degree-d real polynomial, p:{0,1}^N→[0,1], can be approximated by querying only O(N^(1-1/d)) input bits drawn from some suitable distribution. But proving these conjectures, if they’re true, will require a new algorithmic idea.


Now for the second mea culpa. Earlier this year, my student Sabee Grewal and I posted a short preprint on the arXiv entitled Efficient Learning of Non-Interacting Fermion Distributions. In it, we claimed to give a classical algorithm for reconstructing any “free fermionic state” |ψ⟩—that is, a state of n identical fermionic particles, like electrons, each occupying one of m>n possible modes, that can be produced using only “fermionic beamsplitters” and no interaction terms—and for doing so in polynomial time and using a polynomial number of samples (i.e., measurements of where all the fermions are, given a copy of |ψ⟩). Alas, after trying to reply to confused comments from readers and reviewers (albeit, none of them exactly putting their finger on the problem), Sabee and I were able to figure out that we’d done no such thing.

Let me explain the error, since it’s actually really interesting. In our underlying problem, we’re trying to find a collection of unit vectors, call them |v_1⟩,…,|v_m⟩, in C^n. Here, again, n is the number of fermions and m>n is the number of modes. By measuring the “2-mode correlations” (i.e., the probability of finding a fermion in both mode i and mode j), we can figure out the approximate value of |⟨v_i|v_j⟩|—i.e., the absolute value of the inner product—for any i≠j. From that information, we want to recover |v_1⟩,…,|v_m⟩ themselves—or rather, their relative configuration in n-dimensional space, isometries being irrelevant.

It seemed to me and Sabee that, if we knew ⟨v_i|v_j⟩ for all i≠j, then we’d get linear equations that iteratively constrained each |v_i⟩ in terms of ⟨v_i|v_j⟩ for j<i, so all we’d need to do is solve those linear systems, and then (crucially, and this was the main work we did) show that the solution would be robust with respect to small errors in our estimates of each ⟨v_i|v_j⟩. It seemed further to us that, while it was true that the measurements only revealed |⟨v_i|v_j⟩| rather than ⟨v_i|v_j⟩ itself, the “phase information” in ⟨v_i|v_j⟩ was manifestly irrelevant, as it in any case depended on the irrelevant global phases of |v_i⟩ and |v_j⟩ themselves.
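
To see why this felt safe, here’s a minimal numerical sketch (in Python, with illustrative names of my own; this is just the textbook eigendecomposition route, not our paper’s algorithm): given the full Gram matrix, phases included, recovering the vectors up to isometry really is easy.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 3, 5
    V = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
    V /= np.linalg.norm(V, axis=1, keepdims=True)    # m unit vectors in C^n

    G = V @ V.conj().T                       # full Gram matrix, G[i,j] = <v_i|v_j>
    w, U = np.linalg.eigh(G)                 # G = U diag(w) U^dagger, w ascending
    W = U[:, -n:] * np.sqrt(w[-n:].clip(0))  # recovered vectors, rows of an m x n matrix

    # The recovered configuration has the same Gram matrix as the original,
    # so it agrees with |v_1>,...,|v_m> up to an isometry of C^n.
    print(np.allclose(W @ W.conj().T, G))    # True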

Alas, it turns out that the phase information does matter. As an example, suppose I told you only the following about three unit vectors |u⟩,|v⟩,|w⟩ in R^3:

|⟨u|v⟩| = |⟨u|w⟩| = |⟨v|w⟩| = 1/2.

Have I thereby determined these vectors up to isometry? Nope! In one class of solution, all three vectors belong to the same plane, like so:

|u⟩=(1,0,0),
|v⟩=(1/2,(√3)/2,0),
|w⟩=(-1/2,(√3)/2,0).

In a completely different class of solution, the three vectors don’t belong to the same plane, and instead look like three edges of a tetrahedron meeting at a vertex:

|u⟩=(1,0,0),
|v⟩=(1/2,(√3)/2,0),
|w⟩=(1/2,1/(2√3),√(2/3)).

These solutions correspond to different sign choices for ⟨u|v⟩, ⟨u|w⟩, and ⟨v|w⟩—choices that collectively matter, even though each of them is individually irrelevant.

It follows that, even in the special case where the vectors are all real, the 2-mode correlations are not enough information to determine the vectors’ relative positions. (Well, it takes some more work to convert this to a counterexample that could actually arise in the fermion problem, but that work can be done.) And alas, the situation gets even gnarlier when, as for us, the vectors can be complex.
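
Here’s a quick numerical check of the counterexample (a sketch in NumPy): both triples reproduce the measured data |⟨u|v⟩| = |⟨u|w⟩| = |⟨v|w⟩| = 1/2, yet their Gram matrices have different ranks—and since isometries preserve the Gram matrix, no isometry can map one configuration to the other.

    import numpy as np

    s3 = np.sqrt(3)
    planar = np.array([[1, 0, 0],                    # |u>, |v>, |w> in one plane
                       [0.5, s3/2, 0],
                       [-0.5, s3/2, 0]])
    tetra = np.array([[1, 0, 0],                     # three edges of a tetrahedron
                      [0.5, s3/2, 0],
                      [0.5, 1/(2*s3), np.sqrt(2/3)]])

    for name, V in [("planar", planar), ("tetrahedral", tetra)]:
        G = V @ V.T                                  # Gram matrix of inner products
        off_diag = np.abs(G[np.triu_indices(3, k=1)])
        print(name, np.round(off_diag, 6), "rank =", np.linalg.matrix_rank(G))
    # Both print |<.|.>| = [0.5 0.5 0.5], but the ranks are 2 vs. 3.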

Any possible algorithm for our problem will have to solve a system of nonlinear equations (albeit, a massively overconstrained system that’s guaranteed to have a solution), and it will have to use 3-mode correlations (i.e., statistics of triples of fermions), and quite possibly 4-mode correlations and above.

But now comes the good news! Googling revealed that, for reasons having nothing to do with fermions or quantum physics, problems extremely close to ours had already been studied in classical machine learning. The key term here is “Determinantal Point Processes” (DPPs). A DPP is a model where you specify an m×m matrix A (typically symmetric or Hermitian), and then the probabilities of various events are given by the determinants of various principal minors of A. Which is precisely what happens with fermions! In terms of the vectors |v_1⟩,…,|v_m⟩ that I was talking about before, to make this connection we simply let A be the m×m covariance matrix, whose (i,j) entry equals ⟨v_i|v_j⟩.
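
Here’s a minimal sketch of that correspondence (assuming, for simplicity, that the state is a single Slater determinant, so that A is a rank-n projection; the variable names are mine): the probability of finding the n fermions in exactly the modes S is det(A_S), the principal minor of A indexed by S, and these minors sum to 1.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    n, m = 2, 4                              # n fermions, m modes

    # Build |v_1>,...,|v_m> as the rows of an m x n matrix with orthonormal
    # columns, so the covariance matrix A = Q Q^dagger is a rank-n projection.
    X = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
    Q, _ = np.linalg.qr(X)
    A = Q @ Q.conj().T                       # A[i,j] = <v_i|v_j>

    # DPP rule: Pr[fermions occupy exactly the modes in S] = det(A_S).
    probs = {S: np.linalg.det(A[np.ix_(S, S)]).real
             for S in combinations(range(m), n)}
    print(sum(probs.values()))               # ~1.0: the minors form a distribution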

I first learned of this remarkable correspondence between fermions and DPPs a decade ago, from a talk on DPPs that Ben Taskar gave at MIT. Immediately after the talk, I made a mental note that Taskar was a rising star in theoretical machine learning, and that his work would probably be relevant to me in the future. While researching this summer, I was devastated to learn that Taskar died of heart failure in 2013, in his mid-30s and only a couple of years after I’d heard him speak.

The most relevant paper for me and Sabee was called An Efficient Algorithm for the Symmetric Principal Minor Assignment Problem, by Rising, Kulesza, and Taskar. Using a combinatorial algorithm based on minimum spanning trees and chordless cycles, this paper nearly solves our problem, except for two minor details:

  1. It doesn’t do an error analysis, and
  2. It considers complex symmetric matrices, whereas our matrix A is Hermitian (i.e., it equals its conjugate transpose, not its transpose).

So I decided to email Alex Kulesza, one of Taskar’s surviving collaborators who’s now a research scientist at Google NYC, to ask his thoughts about the Hermitian case. Alex kindly replied that they’d been meaning to study that case—a reviewer had even asked about it!—but they’d run into difficulties and didn’t know what it was good for. I asked Alex whether he’d like to join forces with me and Sabee in tackling the Hermitian case, which (I told him) was enormously relevant in quantum physics. To my surprise and delight, Alex agreed.

So we’ve been working on the problem together, making progress, and I’m optimistic that we’ll have some nice result. By using the 3-mode correlations, at least “generically” we can recover the entries of the matrix A up to complex conjugation, but further ideas will be needed to resolve the complex conjugation ambiguity, to whatever extent it actually matters.

In short: on the negative side, there’s much more to the problem of learning a fermionic state than we’d realized. But on the positive side, there’s much more to the problem than we’d realized! As with the simulation of k-query quantum algorithms, my coauthors and I would welcome any ideas. And I apologize to anyone who was misled by our premature (and hereby retracted) claims.


Update (Aug. 11): Here’s a third bonus retraction, which I thank my colleague Mark Wilde for bringing to my attention. Way back in 2005, in my NP-complete Problems and Physical Reality survey article, I “left it as an exercise for the reader” to prove that BQP_CTC, or quantum polynomial time augmented with Deutschian closed timelike curves, is contained in a complexity class called SQG (Short Quantum Games). While it turns out to be true that BQP_CTC ⊆ SQG—as follows from my and Watrous’s 2008 result that BQP_CTC = PSPACE, combined with Gutoski and Wu’s 2010 result that SQG = PSPACE—it’s not something for which I could possibly have had a correct proof back in 2005. I.e., it was a harder exercise than I’d intended!

On blankfaces

August 2nd, 2021

For years, I’ve had a private term I’ve used with my family. To give a few examples of its use:

No, I never applied for that grant. I spent two hours struggling to log in to a web portal designed by the world’s top blankfaces until I finally gave up in despair.

No, I paid for that whole lecture trip out of pocket; I never got the reimbursement they promised. Their blankface administrator just kept sending me back the form, demanding more and more convoluted bank details, until I finally got the hint and dropped it.

No, my daughter Lily isn’t allowed in the swimming pool there. She easily passed their swim test last year, but this year the blankface lifeguard made up a new rule on the spot that she needs to retake the test, so Lily took it again and passed even more easily, but then the lifeguard said she didn’t like the stroke Lily used, so she failed her and didn’t let her retake it. I complained to their blankface athletic director, who launched an ‘investigation.’ The outcome of the ‘investigation’ was that, regardless of the ground truth about how well Lily can swim, their blankface lifeguard said she’s not allowed in the pool, so being blankfaces themselves, they’re going to stand with the lifeguard.

Yeah, the kids spend the entire day indoors, breathing each other’s stale, unventilated air, then they finally go outside and they aren’t allowed on the playground equipment, because of the covid risk from them touching it. Even though we’ve known for more than a year that covid is an airborne disease. Everyone I’ve talked to there agrees that I have a point, but they say their hands are tied. I haven’t yet located the blankface who actually made this decision and stands by it.

What exactly is a blankface? He or she is often a mid-level bureaucrat, but not every bureaucrat is a blankface, and not every blankface is a bureaucrat. A blankface is anyone who enjoys wielding the power entrusted in them to make others miserable by acting like a cog in a broken machine, rather than like a human being with courage, judgment, and responsibility for their actions. A blankface meets every appeal to facts, logic, and plain compassion with the same repetition of rules and regulations and the same blank stare—a blank stare that, more often than not, conceals a contemptuous smile.

The longer I live, the more I see blankfacedness as one of the fundamental evils of the human condition. Yes, it contains large elements of stupidity, incuriosity, malevolence, and bureaucratic indifference, but it’s not reducible to any of those. After enough experience, the first two questions you ask about any organization are:

  1. Who are the blankfaces here?
  2. Who are the people I can talk with to get around the blankfaces?

As far as I can tell, blankfacedness cuts straight across conventional political ideology, gender, and race. (Age, too, except that I’ve never once encountered a blankfaced child.) Brilliance and creativity do seem to offer some protection against blankfacedness—possibly because the smarter you are, the harder it is to justify idiotic rules to yourself—but even there, the protection is far from complete.


Twenty years ago, all the conformists in my age cohort were obsessed with the Harry Potter books and movies—holding parties where they wore wizard costumes, etc. I decided that the Harry Potter phenomenon was a sort of collective insanity: from what I could tell, the stories seemed like startlingly puerile and unoriginal mass-marketed wish-fulfillment fantasies.

Today, those same conformists in my age cohort are more likely to condemn the Harry Potter series as Problematically white, male, and cisnormative, and J. K. Rowling herself as a monstrous bigot whose acquaintances’ acquaintances should be shunned. Naturally, then, there was nothing for me to do but finally read the series! My 8-year-old daughter Lily and I have been partner-reading it for half a year; we’re just finishing book 5. (After we’ve finished the series, we might start on Harry Potter and the Methods of Rationality … which, I confess, I’ve also never read.)

From book 5, I learned something extremely interesting. The most despicable villain in the Harry Potter universe is not Lord Voldemort, who’s mostly just a faraway cipher and abstract embodiment of pure evil, no more hateable than an earthquake. Rather, it’s Dolores Jane Umbridge, the toadlike Ministry of Magic bureaucrat who takes over Hogwarts school, forces out Dumbledore as headmaster, and terrorizes the students with increasingly draconian “Educational Decrees.” Umbridge’s decrees are mostly aimed at punishing Harry Potter and his friends, who’ve embarrassed the Ministry by telling everyone the truth that Voldemort has returned and by readying themselves to fight him, thereby defying the Ministry’s head-in-the-sand policy.

Anyway, I’ll say this for Harry Potter: Rowling’s portrayal of Umbridge is so spot-on and merciless that, for anyone who knows the series, I could simply define a blankface to be anyone sufficiently Umbridge-like.


This week I also finished reading The Premonition, the thrilling account of the runup to covid by Michael Lewis (who also wrote The Big Short, Moneyball, etc.). Lewis tells the stories of a few individuals scattered across US health and government bureaucracies who figured out over the past 20 years that the US was breathtakingly unprepared for a pandemic, and who struggled against official indifference, mostly unsuccessfully, to try to fix that. As covid hit the US in early 2020, these same individuals frantically tried to pull the fire alarms, even as the Trump White House, the CDC, and state bureaucrats all did everything in their power to block and sideline them. We all know the results.

It’s no surprise that, in Lewis’s telling, Trump and his goons come in for world-historic blame: however terrible you thought they were, they were worse. It seems that John Bolton, in particular, gleefully took an ax to everything the two previous administrations had done to try to prepare the federal government for pandemics—after Tom Bossert, the one guy in Trump’s inner circle who’d actually taken pandemic preparation seriously, was forced out for contradicting Trump about Russia and Ukraine.

But the left isn’t spared either. The most compelling character in The Premonition is Charity Dean, who escaped from the Christian fundamentalist sect in which she was raised to put herself through medical school and become a crusading public-health officer for Santa Barbara County. Lewis relates with relish how, again and again, Dean startled the bureaucrats around her by taking matters into her own hands in her war against pathogens—e.g., slicing into a cadaver herself to take samples when the people whose job it was wouldn’t do it.

In 2019, Dean moved to Sacramento to become California’s next chief public health officer, but then Governor Gavin Newsom blocked her expected promotion, instead recruiting someone from the outside named Sonia Angell, who had no infectious disease experience but to whom Dean would have to report. Lewis reports the following as the reason:

“It was an optics problem,” says a senior official in the Department of Health and Human Services. “Charity was too young, too blond, too Barbie. They wanted a person of color.” Sonia Angell identified as Latina.

After it became obvious that the White House and the CDC were both asleep at the wheel, the competent experts’ Plan B was to get California to set a national standard, one that would shame all the other states into acting, by telling the truth about covid and by aggressively testing, tracing, and isolating. And here comes the tragedy: Charity Dean spent from mid-January till mid-March trying to do exactly that, and Sonia Angell blocked her. Angell—who comes across as a real-life Dolores Umbridge—banned Dean from using the word “pandemic,” screamed at her for her insubordination, and systematically shut her out of meetings. Angell’s stated view was that, until and unless the CDC said that there was a pandemic, there was no pandemic—regardless of what hospitals across California might be reporting to the contrary.

As it happens, California was the first state to move aggressively against covid, on March 19—basically because as the bodies started piling up, Dean and her allies finally managed to maneuver around Angell and get the ear of Governor Newsom directly. Had the response started earlier, the US might have had an outcome more in line with most industrialized countries. Half of the 630,000 dead Americans might now be alive.

Sonia Angell fully deserves to have her name immortalized by history as one of the blankest of blankfaces. But of course, Angell was far from alone. Robert Redfield, Trump’s CDC director, was a blankface extraordinaire. Nancy Messonnier, who lied to stay in Trump’s good graces, was a blankface too. The entire CDC and FDA seem to have teemed with blankfaces. As for Anthony Fauci, he became a national hero, maybe even deservedly so, merely by not being 100% a blankface, when basically every other “expert” in the US with visible power was. Fauci cleared a depressingly low bar, one that the people profiled by Lewis cleared at Simone-Biles-like heights.

In March 2020, the fundamental question I had was: where are the supercompetent rule-breaking American heroes from the disaster movies? What’s taking them so long? The Premonition satisfyingly answers that question. It turns out that the heroes did exist, scattered across the American health bureaucracy. They were screaming at the top of their lungs. But they were outvoted by the critical mass of blankfaces that’s become one of my country’s defining features.


Some people will object that the term “blankface” is dehumanizing. The reason I disagree is that a blankface is someone who freely chose to dehumanize themselves: to abdicate their human responsibility to see what’s right in front of them, to act like malfunctioning pieces of electronics even though they, like all of us, were born with the capacity for empathy and reason.

With many other human evils and failings, I have a strong inclination toward mercy, because I understand how someone could’ve succumbed to the temptation—indeed, I worry that I myself might’ve succumbed to it “but for the grace of God.” But here’s the thing about blankfaces: in all my thousands of dealings with them, not once was I ever given cause to wonder whether I might have done the same in their shoes. It’s like, of course I wouldn’t have! Even if I were forced (by my own higher-ups, an intransigent computer system, or whatever else) to foist some bureaucratic horribleness on an innocent victim, I’d be sheepish and apologetic about it. I’d acknowledge the farcical absurdity of what I was making the other person do, or declaring that they couldn’t do. Likewise, even if I were useless in a crisis, at least I’d get out of the way of the people trying to solve it. How could I live with myself otherwise?

The fundamental mystery of the blankfaces, then, is how they can be so alien and yet so common.


Update (Aug. 3): Surprisingly many people seem to have read this post, and come away with the notion that a “blankface” is simply anyone who’s a stickler for rules and formalized procedures. They’ve then tried to refute me with examples of where it’s good to be a stickler, or where I in particular would believe that it’s good.

But no, that’s not it at all.

Rules can be either good or bad. All things considered, I’d probably rather be on a plane piloted by a robotic stickler for safety rules, than by someone who ignored the rules at his or her discretion. And as I said in the post, in the first months of covid, it was ironically the anti-blankfaces who were screaming for rules, regulations, and lockdowns; the blankfaces wanted to continue as though nothing had changed!

Also, “blankface” (just like “homophobe” or “antisemite”) is a serious accusation. I’d never call anyone a blankface merely for sticking with a defensible rule when it turned out, in hindsight, that the rule could’ve been relaxed.

Here’s how to tell a blankface: suppose you see someone enforcing or interpreting a rule in a way that strikes you as obviously absurd. And suppose you point it out to them.

Do they say “I disagree, here’s why it actually does make sense”? They might be mistaken but they’re not a blankface.

Do they say “tell me about it, it makes zero sense, but it’s above my pay grade to change”? You might wish they were more dogged or courageous but again they’re not a blankface.

Or do they ignore all your arguments and just restate the original rule—seemingly angered by what they understood as a challenge to their authority, and delighted to reassert it? That’s the blankface.

Striking new Beeping Busy Beaver champion

July 27th, 2021

For the past few days, I was bummed about the sooner-than-expected loss of Steven Weinberg. Even after putting up my post, I spent hours just watching old interviews with Steve on YouTube and reading his old essays for gems of insight that I’d missed. (Someday, I’ll tackle Steve’s celebrated quantum field theory and general relativity textbooks … but that day is not today.)

Looking for something to cheer me up, I was delighted when Shtetl-Optimized reader Nick Drozd reported a significant new discovery in BusyBeaverology—one that, I’m proud to say, was directly inspired by my Busy Beaver survey article from last summer (see here for blog post).

Recall that BB(n), the nth Busy Beaver number (technically, “Busy Beaver shift number”), is defined as the maximum number of steps that an n-state Turing machine, with 1 tape and 2 symbols, can make on an initially all-0 tape before it invokes a Halt transition. Famously, BB(n) is not only uncomputable, it grows faster than any computable function of n—indeed, computing anything that grows as quickly as Busy Beaver is equivalent to solving the halting problem.

As of 2021, here is the extent of human knowledge about concrete values of this function:

  • BB(1) = 1 (trivial)
  • BB(2) = 6 (Lin 1963)
  • BB(3) = 21 (Lin 1963)
  • BB(4) = 107 (Brady 1983)
  • BB(5) ≥ 47,176,870 (Marxen and Buntrock 1990)
  • BB(6) > 7.4 × 10^36,534 (Kropitz 2010)
  • BB(7) > 10^(2×10^10^10^18,705,352) (“Wythagoras” 2014)

As you can see, the function is reasonably under control for n≤4, then “achieves liftoff” at n=5.

In my survey, inspired by a suggestion of Harvey Friedman, I defined a variant called Beeping Busy Beaver, or BBB. Define a beeping Turing machine to be a TM that has a single designated state where it emits a “beep.” The beeping number of such a machine M, denoted b(M), is the largest t such that M beeps on step t, or ∞ if there’s no finite maximum. Then BBB(n) is the largest finite value of b(M), among all n-state machines M.

I noted that the BBB function grows uncomputably even given an oracle for the ordinary BB function. In fact, computing anything that grows as quickly as BBB is equivalent to solving any problem in the second level of the arithmetical hierarchy (where the computable functions are in the zeroth level, and the halting problem is in the first level). Which means that pinning down the first few values of BBB should be even more breathtakingly fun than doing the same for BB!
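
For concreteness, here’s a minimal sketch (in Python; the machine encoding and names are mine, not from the survey) of computing the beeping number b(M) of a given machine by direct simulation. Of course, the genuinely hard part of pinning down BBB(n)—proving that the machines which seem to beep forever really do—is exactly what brute-force simulation can’t settle.

    def beeping_number(delta, beep_state, max_steps):
        """Simulate a 2-symbol TM on an all-0 tape; return the last step at
        which it enters beep_state, or None if max_steps is exhausted (in
        which case b(M) might be larger, or infinite)."""
        tape, head, state = {}, 0, "A"
        last_beep = 0
        for t in range(1, max_steps + 1):
            symbol = tape.get(head, 0)
            if (state, symbol) not in delta:        # no rule: treat as a halt
                return last_beep
            write, move, state = delta[(state, symbol)]
            tape[head] = write
            head += move
            if state == beep_state:
                last_beep = t
        return None

    # A toy 2-state machine (purely illustrative, not a champion):
    delta = {("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
             ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H")}
    print(beeping_number(delta, beep_state="B", max_steps=10**6))   # 5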

In my survey, I noted the following four concrete results:

  • BBB(1) = 1 = BB(1)
  • BBB(2) = 6 = BB(2)
  • BBB(3) ≥ 55 > 21 = BB(3)
  • BBB(4) ≥ 2,819 > 107 = BB(4)

The first three of these, I managed to get on my own, with the help of a little program I wrote. The fourth one was communicated to me by Nick Drozd even before I finished my survey.

So as of last summer, we knew that BBB coincides with the ordinary Busy Beaver function for n=1 and n=2, then breaks away starting at n=3. We didn’t know how quickly BBB “achieves liftoff.”

But Nick continued plugging away at the problem all year, and he now claims to have resolved the question. More concretely, he claims the following two results:

  • BBB(3) = 55 (via exhaustive enumeration of cases)
  • BBB(4) ≥ 32,779,478 (via a newly-discovered machine)

For more, see Nick’s announcement on the Foundations of Mathematics email list, or his own blog post.

Nick actually writes in terms of yet another Busy Beaver variant, which he calls BLB, or “Blanking Beaver.” He defines BLB(n) to be the maximum finite number of steps that an n-state Turing machine can take before it first “wipes its tape clean”—that is, sets all the tape squares to 0, as they were at the very beginning of the computation, but as they were not at intermediate times. Nick has discovered a 4-state machine that takes 32,779,477 steps to blank out its tape, thereby proving that

  • BLB(4) ≥ 32,779,477.
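
In code, the blanking condition is an easy variant of the beeping simulator sketched earlier (again, just my own illustrative conventions): instead of watching for beeps, track the number of 1s on the tape and report the first step at which it returns to zero after having been positive.

    def blanking_number(delta, max_steps):
        """Return the first step at which the tape is all-0 again after
        having been non-blank, or None if that never happens in max_steps."""
        tape, head, state = {}, 0, "A"
        ones, was_dirty = 0, False
        for t in range(1, max_steps + 1):
            symbol = tape.get(head, 0)
            if (state, symbol) not in delta:         # halted without blanking
                return None
            write, move, state = delta[(state, symbol)]
            ones += write - symbol                   # running count of 1s on tape
            tape[head] = write
            head += move
            was_dirty = was_dirty or ones > 0
            if was_dirty and ones == 0:
                return t
        return None
    # Run on Nick's 4-state champion (transition table not reproduced here),
    # this would return 32,779,477.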

Nick’s construction, when investigated, turns out to be based on a “Collatz-like” iterative process—exactly like the BB(5) champion and most of the other strong Busy Beaver contenders currently known. A simple modification of his construction yields the lower bound on BBB.

Note that the Blanking Beaver function does not have the same sort of super-uncomputable growth that Beeping Busy Beaver has: it merely grows “normally” uncomputably fast, like the original BB function did. Yet we see that BLB, just like BBB, already “achieves liftoff” by n=4, rather than n=5. So the real lesson here is that 4-state Turing machines can already do fantastically complicated things on blank tapes. It’s just that the usual definitions of the BB function artificially prevent us from seeing that; they hide the uncomputable insanity until we get to 5 states.

Steven Weinberg (1933-2021): a personal view

July 24th, 2021

Steven Weinberg sitting in front of a chalkboard covered in equations

Steven Weinberg was, perhaps, the last truly towering figure of 20th-century physics. In 1967, he wrote a 3-page paper saying in effect that as far as he could see, two of the four fundamental forces of the universe—namely, electromagnetism and the weak nuclear force—had actually been the same force until a tiny fraction of a second after the Big Bang, when a broken symmetry caused them to decouple. Strangely, he had developed the math underlying this idea for the strong nuclear force, and it didn’t work there, but it did seem to work for the weak force and electromagnetism. Steve noted that, if true, this would require the existence of two force-carrying particles that hadn’t yet been seen — the W and Z bosons — and would also require the existence of the famous Higgs boson.

By 1979, enough of this picture had been confirmed by experiment that Steve shared the Nobel Prize in Physics with Sheldon Glashow—Steve’s former high-school classmate—as well as with Abdus Salam, both of whom had separately developed pieces of the same puzzle. As arguably the central architect of what we now call the Standard Model of elementary particles, Steve was in the ultra-rarefied class where, had he not won the Nobel Prize, it would’ve been a stain on the prize rather than on him.

Steve once recounted in my hearing that Richard Feynman initially heaped scorn on the electroweak proposal. Late one night, however, Steve was woken up by a phone call. It was Feynman. “I believe your theory now,” Feynman announced. “Why?” Steve asked. Feynman, being Feynman, gave some idiosyncratic reason that he’d worked out for himself.

It used to happen more often that someone would put forward a bold new proposal about the most fundamental laws of nature … and then the experimentalists would actually go out and confirm it. Besides with the Standard Model, though, there’s approximately one other time that that’s happened in the living memory of most of today’s physicists. Namely, when astronomers discovered in 1998 that the expansion of the universe was accelerating, apparently due to a dark energy that behaved like Einstein’s long-ago-rejected cosmological constant. Very few had expected such a result. There was one prominent exception, though: Steve Weinberg had written in 1987 that he saw no reason why the cosmological constant shouldn’t take a nonzero value that was still tiny enough to be consistent with galaxy formation and so forth.


In his long and illustrious career, one of the least important things Steve did, six years ago, was to play a major role in recruiting me and my wife Dana to UT Austin. The first time I met Steve, his first question to me was “have we met before? you look familiar.” It turns out that he’d met my dad, Steve Aaronson, way back in the 1970s, when my dad (then a young science writer) had interviewed Weinberg for a magazine article. I was astonished that Weinberg would remember such a thing across decades.

Steve was then gracious enough to take me, Dana, and both of my parents out to dinner in Austin as part of my and Dana’s recruiting trip.

We talked, among other things, about Telluride House at Cornell, where Steve had lived as an undergrad in the early 1950s and where I’d lived as an undergrad almost half a century later. Steve said that, while he loved the intellectual atmosphere at Telluride, he tried to have as little to do as possible with the “self-government” aspect, since he found the political squabbles that convulsed many of the humanities majors there to be a waste of time. I burst out laughing, because … well, imagine you got to have dinner with James Clerk Maxwell, and he opened up about some ridiculously specific pet peeve from his college years, and it was your ridiculously specific pet peeve from your college years.

(Steve claimed to us, not entirely convincingly, that he was a mediocre student at Cornell, more interested in “necking” with his fellow student and future wife Louise than in studying physics.)

After Dana and I came to Austin, Steve was kind enough to invite me to the high-energy theoretical physics lunches, where I chatted with him and the other members of his group every week (or better yet, simply listened). I’d usually walk to the faculty club ten minutes early. Steve, having arrived by car, would be sitting alone in an armchair, reading a newspaper, while he waited for the other physicists to arrive by foot. No matter how scorching the Texas sun, Steve would always be wearing a suit (usually a tan one) and a necktie, his walking-cane by his side. I, typically in ratty shorts and t-shirt, would sit in the armchair next to him, and we’d talk—about the latest developments in quantum computing and information (Steve, a perpetual student, would pepper me with questions), or his recent work on nonlinear modifications of quantum mechanics, or his memories of Cambridge, MA, or climate change or the anti-Israel protests in Austin or whatever else. These conversations, brief and inconsequential as they probably were to him, were highlights of my week.

There was, of course, something a little melancholy about getting to know such a great man only in the twilight of his life. To be clear, Steve Weinberg in his mid-to-late 80s was far more cogent, articulate, and quick to understand what was said to him than just about anyone you’d ever met in their prime. But then, after a short conversation, he’d have to leave for a nap. Steve was as clear-eyed and direct about his age and impending mortality as he was about everything else. “Scott!” he once greeted me. “I just saw the announcement for your physics colloquium about quantum supremacy. I hope I’m still alive next month to attend it.”

(As it happens, the colloquium in question was on November 9, 2016, the day we learned that Trump would become president. I offered to postpone the talk, since no one could concentrate on physics on such a day. While several of the physicists agreed that that was the right call, Steve convinced me to go ahead with the following message: “I sympathize, but I do want to hear you … There is some virtue in just plowing on.”)

I sometimes felt, as well, like I was speaking with Steve across a cultural chasm even greater than the half-century that separated us in age. Steve enjoyed nothing more than to discourse at length, in his booming New-York-accented baritone, about opera, or ballet, or obscure corners of 18th-century history. It would be easy to feel like a total philistine by comparison … and I did. Steve also told me that he never reads blogs or other social media, since he’s unable to believe any written work is “real” unless it’s published, ideally on paper. I could only envy such an attitude.


If you did try to judge by the social media that he never read, you might conclude that Steve would be remembered by the wider world less for any of his epochal contributions to physics than for a single viral quote of his:

With or without religion, good people can behave well and bad people can do evil; but for good people to do evil — that takes religion.

I can testify that Steve fully lived his atheism. Four years ago, I invited him (along with many other UT colleagues) to the brit milah of my newborn son Daniel. Steve said he’d be happy to come over to our house another time (and I’m happy to say that he did a year later), but not to witness any body parts being cut.

Despite his hostility to Judaism—along with every other religion—Steve was a vociferous supporter of the state of Israel, almost to the point of making me look like Edward Said or Noam Chomsky. For Steve, Zionism was not in spite of his liberal, universalist Enlightenment ideals but because of them.

Anyway, there’s no need even to wonder whether Steve had any sort of deathbed conversion. He’d laugh at the thought.


In 2015, Steve published To Explain the World, a history of human progress in physics and astronomy from the ancient Greeks to Newton (when, Steve says, the scientific ethos reached the form that it still basically has today). It’s unlike any other history-of-science book that I’ve read. Of course I’d read other books about Aristarchus and Ptolemy and so forth, but I’d never read a modern writer treating them not as historical subjects, but as professional colleagues merely separated in time. Again and again, Steve would redo ancient calculations, finding errors that had escaped historical notice; he’d remark on how Eratosthenes or Kepler could’ve done better with the data available to them; he’d grade the ancients by how much of modern physics and cosmology they’d correctly anticipated.

To Explain the World was savaged in reviews by professional science historians. Apparently, Steve had committed the unforgivable sin of “Whig history”: that is, judging past natural philosophers by the standards of today. Steve clung to the naïve, debunked, scientistic notions that there’s such a thing as “actual right answers” about how the universe works; that we today are, at any rate, much closer to those right answers than the ancients were; and that we can judge the ancients by how close they got to the right answers that we now know.

As I read the sneering reviews, I kept thinking: so suppose Archimedes, Copernicus, and all the rest were brought back from the dead. Who would they rather talk to: historians seeking to explore every facet of their misconceptions, like anthropologists with a paleolithic tribe; or Steve Weinberg, who’d want to bring them up to speed as quickly as possible so they could continue the joint quest?


When it comes to the foundations of quantum mechanics, Steve took the view that no existing interpretation is satisfactory, although the Many-Worlds Interpretation is perhaps the least bad of the bunch. Steve felt that our reaction to this state of affairs should be to test quantum mechanics more precisely—for example, by looking for tiny nonlinearities in the Schrödinger equation, or other signs that QM itself is only a limit of some more all-encompassing theory. This is, to put it mildly, not a widely-held view among high-energy physicists—but it provided a fascinating glimpse into how Steve’s mind works.

Here was, empirically, the most successful theoretical physicist alive, and again and again, his response to conceptual confusion was not to ruminate more about basic principles but to ask for more data or do a more detailed calculation. He never, ever let go of a short tether to the actual testable consequences of whatever was being talked about, or future experiments that might change the situation.

(Steve worked on string theory in the early 1980s, and he remained engaged with it for the rest of his life, for example by recruiting the string theorists Jacques Distler and Willy Fischler to UT Austin. But he later soured on the prospects for getting testable consequences out of string theory within a reasonable timeframe. And he once complained to me that the papers he’d read about “It from Qubit,” AdS/CFT, and the black hole information problem had had “too many words and not enough equations.”)


Steve was, famously, about as hardcore a reductionist as has ever existed on earth. He was a reductionist not just in the usual sense that he believed there are fundamental laws of physics, from which, together with the initial conditions, everything that happens in our universe can be calculated in principle (if not in practice), at least probabilistically. He was a reductionist in the stronger sense that he thought the quest to discover the fundamental laws of the universe had a special pride of place among all human endeavors—a place not shared by the many sciences devoted to the study of complex emergent behavior, interesting and important though they might be.

This came through clearly in Steve’s critical review of Stephen Wolfram’s A New Kind of Science, where Steve (Weinberg, that is) articulated his views of why “free-floating” theories of complex behavior can’t take the place of a reductionistic description of our actual universe. (Of course, I was also highly critical of A New Kind of Science in my review, but for somewhat different reasons than Steve was.) Steve’s reductionism was also clearly expressed in his testimony to Congress in support of continued funding for the Superconducting Supercollider. (Famously, Phil Anderson testified against the SSC, arguing that the money would better be spent on condensed-matter physics and other sciences of emergent behavior. The result: Congress did cancel the SSC, and it redirected precisely zero of the money to other sciences. But at least Steve lived to see the LHC dramatically confirm the existence of the Higgs boson, as the SSC would have.)

I, of course, have devoted my career to theoretical computer science, which you might broadly call a “science of emergent behavior”: it tries to figure out the ultimate possibilities and limits of computation, taking the underlying laws of physics as given. Quantum computing, in particular, takes as its input a physical theory that was already known by 1926, and studies what can be done with it. So you might expect me to disagree passionately with Weinberg on reductionism versus holism.

In reality, I have a hard time pinpointing any substantive difference. Mostly I see a difference in opportunities: Steve saw a golden chance to contribute something to the millennia-old quest to discover the fundamental laws of nature, at the tail end of the heroic era of particle physics that culminated in what we now call the Standard Model. He was brilliant enough to seize that chance. I didn’t see a similar chance: possibly because it no longer existed; almost certainly because, even if it did, I wouldn’t have had the right mind for it. I found a different chance, to work at the intersection of physics and computer science that was finally kicking into high gear at the end of the 20th century. Interestingly, while I came to that intersection from the CS side, quite a few who were originally trained as high-energy physicists ended up there as well—including a star PhD student of Steve Weinberg’s named John Preskill.

Despite his reductionism, Steve was as curious and enthusiastic about quantum computation as he was about a hundred other topics beyond particle physics—he even ended his quantum mechanics textbook with a chapter about Shor’s factoring algorithm. Having said that, a central reason for his enthusiasm about QC was that he clearly saw how demanding a test it would be of quantum mechanics itself—and as I mentioned earlier, Steve was open to the possibility that quantum mechanics might not be exactly true.


It would be an understatement to call Steve “left-of-center.” He believed in higher taxes on rich people like himself to service a robust social safety net. When Trump won, Steve remarked to me that most of the disgusting and outrageous things Trump would do could be reversed in a generation or so—but not the aggressive climate change denial; that actually could matter on the scale of centuries. Steve made the news in Austin for openly defying the Texas law forcing public universities to allow concealed carry on campus: he said that, regardless of what the law said, firearms would not be welcome in his classroom. (Louise, Steve’s wife for 67 years and a professor at UT Austin’s law school, also wrote perhaps the definitive scholarly takedown of the shameful Bush v. Gore Supreme Court decision, which installed George W. Bush as president.)

All the same, during the “science wars” of the 1990s, Steve was scathing about the academic left’s postmodernist streak and deeply sympathetic to what Alan Sokal had done with his Social Text hoax. Steve also once told me that, when he (like other UT faculty) was required to write a statement about what he would do to advance Diversity, Equity, and Inclusion, he submitted just a single sentence: “I will seek the best candidates, without regard to race or sex.” I remarked that he might be one of the only academics who could get away with that.

I confess that, for the past five years, knowing Steve was a greater source of psychological strength for me than, from a rational standpoint, it probably should have been. Regular readers will know that I’ve spent months of my life agonizing over various nasty things people have said about me on Twitter and Reddit—that I’m a sexist white male douchebag, a clueless techbro STEMlord, a neoliberal Zionist shill, and I forget what else.

But lately I’ve had a secret emotional weapon that helped somewhat: namely, the certainty that Steven Weinberg had more intellectual power in a single toenail clipping than these Twitter-attackers had collectively experienced over the course of their lives. It’s like, have you heard the joke where two rabbis are arguing some point of Talmud, and then God speaks from a booming thundercloud to declare that the first rabbi is right, and then the second rabbi says “OK fine, now it’s 2 against 1”? Having the W and Z bosons and the Higgs boson that you predicted actually turn up at the particle accelerator is not exactly God declaring from a thundercloud that the way your mind works is aligned with the way the world actually is—Steve, of course, would wince at the suggestion—but it’s about the closest thing available in this universe. My secret emotional weapon was that I knew the man who’d experienced this, arguably more than any of the 7.6 billion other living humans, and not only did that man not sneer at me, but by some freakish coincidence, he seemed to have reached roughly the same views as I had on >95% of controversial questions where we both had strong opinions.


My final conversations with Steve Weinberg were about a laptop. When covid started in March 2020, Steve and Louise, being in their late 80s, naturally didn’t want to take chances, and rigorously sheltered at home. But an issue emerged: Steve couldn’t install Zoom on his Bronze Age computer, and so couldn’t participate in the virtual meetings of his own group, nor could he do Zoom calls with his daughter and granddaughter. While as a theoretical computer scientist, I don’t normally volunteer myself as tech support staff, I decided that an exception was more than warranted in this case. The quickest solution was to configure one of my own old laptops with everything Steve needed and bring it over to his house.

Later, Steve emailed me to say that, while the laptop had worked great and been a lifesaver, he’d finally bought his own laptop, so I should come by to pick mine up. I delayed and delayed with that, but finally decided I should do it before leaving Austin at the beginning of this summer. So I emailed Steve to tell him I’d be coming. He replied to me asking Louise to leave the laptop on the porch — but the email was addressed only to me, not her.

At that moment, I knew something had changed: only a year before, incredibly, I’d been more senile and out-of-it as a 39-year-old than Steve had been as an 87-year-old. What I didn’t know at the time was that Steve had sent that email from the hospital when he was close to death. It was the last I heard from him.

(Once I learned what was going on, I did send a get-well note, which I hope Steve saw, saying that I hoped he appreciated that I wasn’t praying for him.)


Besides the quote about good people, bad people, and religion, the other quote of Steve’s that he never managed to live down came from the last pages of The First Three Minutes, his classic 1970s popularization of big-bang cosmology:

The more the universe seems comprehensible, the more it also seems pointless.

In the 1993 epilogue, Steve tempered this with some more hopeful words, nearly as famous:

The effort to understand the universe is one of the very few things which lifts human life a little above the level of farce and gives it some of the grace of tragedy.

It’s not my purpose here to resolve the question of whether life or the universe has a point. What I can say is that, even in his last years, Steve never for a nanosecond acted as if life was pointless. He already had all the material comforts and academic renown anyone could possibly want. He could have spent all day in his swimming pool, or listening to operas. Instead, he continued publishing textbooks—a quantum mechanics textbook in 2012, an astrophysics textbook in 2019, and a “Foundations of Modern Physics” textbook in 2021 (!). As recently as this year, he continued writing papers—and not just “great man reminiscing” papers, but hardcore technical papers. He continued writing with nearly unmatched lucidity for a general audience, in the New York Review of Books and elsewhere. And I can attest that he continued peppering visiting speakers with questions about stellar evolution or whatever else they were experts on—because, more likely than not, he had redone some calculation himself and gotten a subtly different result from what was in the textbooks.

If God exists, I can’t believe He or She would find nothing more interesting to do with Steve than to torture him for his unbelief. More likely, I think, God is right now talking to Steve the same way Steve talked to Aristarchus in To Explain the World: “yes, you were close about the origin of neutrino masses, but here’s the part you were missing…” While, of course, Steve is redoing God’s calculation to be sure.


Feel free to use the comments as a place to share your own memories.



Slowly emerging from blog-hibervacation

July 21st, 2021

Alright everyone:

  1. Victor Galitski has an impassioned rant against out-of-control quantum computing hype, which I enjoyed and enthusiastically recommend, although I wished Galitski had engaged a bit more with the strongest arguments for optimism (e.g., the recent sampling-based supremacy experiments, the extrapolations that show gate fidelities crossing the fault-tolerance threshold within the next decade). Even if I’ve been saying similar things on this blog for 15 years, I clearly haven’t been doing so in a style that works for everyone. Quantum information needs as many people as possible who will tell the truth as best they see it, unencumbered by any competing interests, and has nothing legitimate to fear from that. The modern intersection of quantum theory and computer science has raised profound scientific questions that will be with us for decades to come. It’s a lily that need not be gilded with hype.
  2. Last month Limaye, Srinivasan, and Tavenas posted an exciting preprint to ECCC, which apparently proves the first (slightly) superpolynomial lower bound on the size of constant-depth arithmetic circuits, over fields of characteristic 0. Assuming it’s correct, this is another small victory in the generations-long war against the P vs. NP problem.
  3. I’m grateful to the Texas Democratic legislators who fled the state to prevent the legislature, a couple miles from my house, from having a quorum to enact new voting restrictions, and who thereby drew national attention to the enormity of what’s at stake. It should go without saying that, if a minority gets to rule indefinitely by forcing through laws to suppress the votes of a majority that would otherwise unseat it, thereby giving itself the power to force through more such laws, etc., then we no longer live in a democracy but in a banana republic. And there’s no symmetry to the situation: no matter how terrified you (or I) might feel about wokeists and their denunciation campaigns, the Democrats have no comparable effort to suppress Republican votes. Alas, I don’t know of any solutions beyond the obvious one, of trying to deal the conspiracy-addled grievance party crushing defeats in 2022 and 2024.
  4. Added: Here’s the video of my recent Astral Codex Ten ask-me-anything session.

Open thread on new quantum supremacy claims

July 4th, 2021

Happy 4th to those in the US!

The group of Chaoyang Lu and Jianwei Pan, based at USTC in China, has been on a serious quantum supremacy tear lately. Recall that last December, USTC announced the achievement of quantum supremacy via Gaussian BosonSampling, with 50-70 detected photons—the second claim of sampling-based quantum supremacy, after Google’s in Fall 2019. However, skeptics then poked holes in the USTC claim, showing how they could spoof the results with a classical computer, basically by reproducing the k-photon correlations for relatively small values of k. Debate over the details continues, but the Chinese group seeks to render the debate largely moot with a new and better Gaussian BosonSampling experiment, with 144 modes and up to 113 detected photons. They say they were able to measure k-photon correlations for k up to about 19, which if true would constitute a serious obstacle to the classical simulation strategies that people discussed for the previous experiment.

In the meantime, though, an overlapping group of authors had put out another paper the day before (!) reporting a sampling-based quantum supremacy experiment using superconducting qubits—extremely similar to what Google did (the same circuit depth and everything), except now with 56 qubits rather than 53.

I confess that I haven’t yet studied either paper in detail—among other reasons, because I’m on vacation with my family at the beach, and because I’m trying to spend what work-time I have on my own projects. But anyone who has read them, please use the comments of this post to discuss! Hopefully I’ll learn something.

To confine myself to some general comments: since Google’s announcement in Fall 2019, I’ve consistently said that sampling-based quantum supremacy is not yet a done deal. I’ve said that quantum supremacy seems important enough to want independent replications, and demonstrations in other hardware platforms like ion traps and photonics, and better gate fidelity, and better classical hardness, and better verification protocols. Most of all, I’ve said that we needed a genuine dialogue between the “quantum supremacists” and the classical skeptics: the former doing experiments and releasing all their data, the latter trying to design efficient classical simulations for those experiments, and so on in an iterative process. Just like in applied cryptography, we’d only have real confidence in a quantum supremacy claim once it had survived at least a few years of attacks by skeptics. So I’m delighted that this is precisely what’s now happening. USTC’s papers are two new volleys in this back-and-forth; we all eagerly await the next volley, whichever side it comes from.

While I’ve been trying for years to move away from the expectation that I blog about each and every QC announcement that someone messages me about, maybe I’ll also say a word about the recent announcement by IBM of a quantum advantage in space complexity (see here for popular article and here for arXiv preprint). There appears to be a nice theoretical result here, about the ability to evaluate any symmetric Boolean function with a single qubit in a branching-program-like model. I’d love to understand that result better. But to answer the question I received, this is another case where, once you know the protocol, you know both that the experiment can be done and exactly what its result will be (namely, the thing predicted by QM). So I think the interest is almost entirely in the protocol itself.

STOC’2021 and BosonSampling

June 23rd, 2021

Happy birthday to Alan Turing!

This week I’m participating virtually in STOC’2021, which today had a celebration of the 50th anniversary of NP-completeness (featuring Steve Cook, Richard Karp, Leonid Levin, Christos Papadimitriou, and Avi Wigderson), and which tomorrow will have a day’s worth of quantum computing content, including a tutorial on MIP*=RE, two quantum sessions, and an invited talk on quantum supremacy by John Martinis. I confess that I’m not a fan of GatherTown, the platform being used for STOC. Basically, you get a little avatar who wanders around a virtual hotel lobby and enters sessions—but it seems to reproduce all of the frustrating and annoying parts of the conference experience without any of the good parts.

Ah! But I got the surprising news that Alex Arkhipov and I are among the winners of STOC’s first-ever “Test of Time Award,” for our paper on BosonSampling. It feels strange to win a “Test of Time” award for work that we did in 2011, which still seems like yesterday to me. All the more since the experimental status and prospects of quantum supremacy via BosonSampling are still very much live, unresolved questions.

Speaking of which: on Monday, Alexey Rubtsov, of the Skolkovo Institute in Moscow, gave a talk for our quantum information group meeting at UT, about his recent work with Popova on classically simulating Gaussian BosonSampling. From the talk, I learned something extremely important. I had imagined that their simulation must take advantage of the high rate of photon loss in actual experiments (like the USTC experiment from late 2020), because how else are you going to simulate BosonSampling efficiently? But Rubtsov explained that that’s not how it works at all. While their algorithm is heuristic and remains to be rigorously analyzed, numerical studies suggest that it works even with no photon losses or other errors. Having said that, their algorithm works:

  • only for Gaussian BosonSampling, not Fock-state BosonSampling (as Arkhipov and I had originally proposed),
  • only for threshold detectors, not photon-counting detectors, and
  • only for a small number of modes (say, linear in the number of photons), not for a large number of modes (say, quadratic in the number of photons) as in the original proposal.

So, bottom line, it now looks like the USTC experiment, amazing engineering achievement though it was, is not hard to spoof with a classical computer. If so, this is because of multiple ways in which the experiment differed from my and Arkhipov’s original theoretical proposal. We know exactly what those ways are—indeed, you can find them in my earlier blog posts on the subject—and hopefully they can be addressed in future experiments. All in all, then, we’re left with a powerful demonstration of the continuing relevance of formal hardness reductions, and the danger of replacing them with intuitions and “well, it still seems hard to me.” So I hope the committee won’t rescind my and Arkhipov’s Test of Time Award based on these developments in the past couple weeks!

On Guilt

June 10th, 2021

The other night Dana and I watched “The Internet’s Own Boy,” the 2014 documentary about the life and work of Aaron Swartz, which I’d somehow missed when it came out. Swartz, for anyone who doesn’t remember, was the child prodigy who helped create RSS and Reddit, who then became a campaigner for an open Internet, who was arrested for using a laptop in an MIT supply closet to download millions of journal articles and threatened with decades in prison, and who then committed suicide at age 26. I regret that I never knew Swartz, though he did once send me a fan email about Quantum Computing Since Democritus.

Say whatever you want about the tactical wisdom or the legality of Swartz’s actions; it seems inarguable to me that he was morally correct, that certain categories of information (e.g. legal opinions and taxpayer-funded scientific papers) need to be made freely available, and that sooner or later our civilization will catch up to Swartz and regard his position as completely obvious. The beautifully-made documentary filled me with rage and guilt not only that the world had failed Swartz, but that I personally had failed him.

At the time of Swartz’s arrest, prosecution, and suicide, I was an MIT CS professor who’d previously written in strong support of open access to scientific literature, and who had the platform of this blog. Had I understood what was going on with Swartz—had I taken the time to find out what was going on—I could have been in a good position to help organize a grassroots campaign to pressure the MIT administration to urge prosecutors to drop the case (as JSTOR had already done), which could plausibly have made a difference. As it was, I was preoccupied in those years with BosonSampling, getting married, and so on; I didn’t bother to learn whether anything was being done or could be done about the Aaron Swartz matter, and then before I knew it, Swartz had joined Alan Turing in computer science’s pantheon of lost geniuses.

But maybe there was something deeper to my inaction. If I’d strongly defended the substance of what Swartz had done, it would’ve raised the question: why wasn’t I doing the same? Why was I merely complaining about paywalled journals from the comfort of my professor’s office, rather than putting my own freedom on the line like Swartz was? It was as though I had to put some psychological distance between myself and the situation, in order to justify my life choices to myself.

Even though I see the error in that way of “thinking,” it keeps recurring, keeps causing me to make choices that I feel guilt or at least regret about later. In February 2020, there were a few smart people saying that a new viral pneumonia from Wuhan was about to upend life on earth, but the people around me certainly weren’t acting that way, and I wasn’t acting that way either … and so, “for the sake of internal consistency,” I didn’t spend much time thinking about it or investigating it. After all, if the fears of a global pandemic had a good chance of being true, I should be dropping everything else and panicking, shouldn’t I? But I wasn’t dropping everything else and panicking … so how could the fears be true?

Then I publicly repented, and resolved not to make such an error again. And now, 15 months later, I realize that I have made such an error again.

All throughout the pandemic, I’d ask my friends, privately, why the hypothesis that the virus had accidentally leaked from the Wuhan Institute of Virology wasn’t being taken far more seriously, given what seemed like a shockingly strong prima facie case. But I didn’t discuss the lab leak scenario on this blog, except once in passing. I could say I didn’t discuss it because I’m not a virologist and I had nothing new to contribute. But I worry that I also didn’t discuss it because it seemed incompatible with my self-conception as a cautious scientist who’s skeptical of lurid coverups and conspiracies—and because I’d already spent my “weirdness capital” on other issues, and didn’t relish the prospect of being sneered at on social media yet again. Instead I simply waited for discussion of the lab leak hypothesis to become “safe” and “respectable,” as today it finally has, thanks to writers who were more courageous than I was. I became, basically, another sheep in one of the conformist herds that we rightly despise when we read about them in history.

(For all that, it’s still plausible to me that the virus had a natural origin after all. What’s become clear is simply that, even if so, the failure to take the possibility of a lab escape more seriously back when the trail of evidence was fresher will stand as a major intellectual scandal of our time.)

Sometimes people are wracked with guilt, but over completely different things than the world wants them to be wracked with guilt over. This was one of the great lessons that I learned from reading Richard Rhodes’s The Making of the Atomic Bomb. Many of the Manhattan Project physicists felt lifelong guilt, not that they’d participated in building the bomb, but only that they hadn’t finished the bomb by 1943, when it could have ended the war in Europe and the Holocaust.

On a much smaller scale, I suppose some readers would still like me to feel guilt about comment 171, or some of the other stuff I wrote about nerds, dating, and feminism … or if not that, then maybe about my defense of a two-state solution for Israel and Palestine, or of standardized tests and accelerated math programs, or maybe my vehement condemnation of Trump and his failed insurrection. Or any of the dozens of other times when I stood up and said something I actually believed, or when I recounted my experiences as accurately as I could. The truth is, though, I don’t.

Looking back—which, now that I’m 40, I confess is an increasingly large fraction of my time—the pattern seems consistent. I feel guilty, not for having stood up for what I strongly believed in, but for having failed to do so. This suggests that, if I want fewer regrets, then I should click “Publish” on more potentially controversial posts! I don’t know how to force myself to do that, but maybe this post itself is a step.

More quantum computing popularization!

June 8th, 2021

I now have a feature article up at Quanta magazine, entitled “What Makes Quantum Computing So Hard To Explain?” I.e., why do journalists, investors, etc. so consistently get central points wrong, even after the subject has been in public consciousness for more than 25 years? Perhaps unsurprisingly, I found it hard to discuss that meta-level question, as Quanta’s editors asked me to do, without also engaging in the object-level task of actually explaining QC. For regular Shtetl-Optimized readers, there will be nothing new here, but I’m happy with how the piece turned out.

Accompanying the Quanta piece is a 10-minute YouTube explainer on quantum computing, which (besides snazzy graphics) features interviews with me, John Preskill, and Dorit Aharonov.

On a different note, my colleague Mark Wilde has recorded a punk-rock song about BosonSampling. I can honestly report that it’s some of the finest boson-themed music I’ve heard in years. It includes the following lyrics:

Quantum computer, Ain’t no loser
Quantum computer, Quantum computer

People out on the streets
They don’t know what it is
They think it finds the cliques
Or finds graph colorings
But it don’t solve anything
Said it don’t solve anything
Bosonic slot machine
My lil’ photonic dream

Speaking of BosonSampling, A. S. Popova and A. N. Rubtsov, of the Skolkovo Institute in Moscow, have a new preprint entitled Cracking the Quantum Advantage threshold for Gaussian Boson Sampling. In it, they claim to give an efficient classical algorithm to simulate noisy GBS experiments, like the one six months ago from USTC in China. I’m still unsure how well this scales from 30-40 photons up to 50-70 photons; which imperfections of the USTC experiment are primarily being taken advantage of (photon losses?); and how this relates to the earlier proposed classical algorithms for simulating noisy BosonSampling, like the one by Kalai and Kindler. Anyone with any insight is welcome to share!

OK, one last announcement: the Simons Institute for the Theory of Computing, in Berkeley, has a new online lecture series called “Breakthroughs,” which many readers of this blog might want to check out.

Three updates

June 1st, 2021
  1. Hooray, I’m today’s “Featured ACM Member”! Which basically means, yet another interview with me about quantum computing, with questions including what’s most surprised me about the development of QC, and what students should do to get into the field.
  2. I’m proud to announce that An Automated Approach to the Collatz Conjecture, a paper by Emre Yolcu, myself, and Marijn Heule that we started working on over four years ago, is finally available on the arXiv, and will be presented at the 2021 Conference on Automated Deduction. Long story short: no, we didn’t prove Collatz, but we have an approach that can for the first time prove certain Collatz-like statements in a fully automated way, so hopefully that’s interesting! (For anyone who’d like the conjecture itself in front of them, see the snippet after this list.) There was also a Quanta article even before our paper had come out (I wasn’t thrilled about the timing).
  3. The legendary Baba Brinkman has a new rap about quantum computing (hat tip to blog commenter YD). Having just watched the music video, I see it as one of the better popularization efforts our field has seen in the past 25 years—more coherent than the average journalistic account and with a much better backbeat. (I do, however, take a more guarded view than Brinkman of the potential applications, especially to e.g. autonomous driving and supply-chain optimization.)
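Re item 2: for anyone who’d like the conjecture in front of them, here’s the Collatz map itself. This is just the function whose long-run behavior is at issue (the conjecture says that iterating it from any positive integer eventually reaches 1); none of our paper’s automated-proof machinery appears here.

    def collatz_steps(n: int) -> int:
        """Number of iterations of the Collatz map needed to reach 1 from n."""
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print(collatz_steps(27))   # 111: a famously long trajectory from so small a start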