Archive for the ‘Rage Against Doofosity’ Category

Never go to “Planet Word” in Washington DC

Friday, March 15th, 2024

In fact, don’t try to take kids to Washington DC if you can possibly avoid it.

This is my public service announcement. This is the value I feel I can add to the world today.

Dana and I decided to take the kids to DC for spring break. The trip, alas, has been hell—a constant struggle against logistical failures. The first days were mostly spent sitting in traffic or searching for phantom parking spaces. (So then we switched to the Metro, promptly got lost, and had our Metro cards rejected by the machines.) Or, at crowded cafes, I spent the time searching for a table so my starving kids could eat—and then when I finally found a table, a woman, smug and sure of herself, evicted us from it because she was “going to” sit there, and my kids had to see that their dad could not provide for their basic needs, and that woman will never face any consequence for what she did.

Anyway, this afternoon, utterly frazzled and stressed and defeated, we entered “Planet Word,” a museum about language. Sounds pretty good, right? Except my soon-to-be 7-year-old son got bored by numerous exhibits that weren’t for him. So I told him he could lead the way and find any exhibit he liked.

Finally my son found an exhibit that fascinated him, one where he could weigh plastic fruits on a balancing scale. He was engrossed by it, he was learning, he was asking questions, I reflected that maybe the trip wasn’t a total loss … and that’s when a museum employee pointed at us, and screamed at us to leave the room, because “this exhibit was sold out.”

The room was actually almost empty (!). No one had stopped us from entering the room. No one else was waiting to use the balancing scale. There was no sign to warn us we were doing anything wrong. I would’ve paid them hundreds of dollars in that moment if only we could stay. My son didn’t understand why he was suddenly treated as a delinquent. He then wanted to leave the whole museum, and so did I. The day was ruined for us.

Mustering my courage to do something uncharacteristic for me, I complained at the front desk. They sneered and snickered at me, basically told me to go to hell. Looking deeply into their dumb, blank expressions, I realized that I had as much chance of any comprehension or sympathy as I’d have from a warthog. It’s true that, on the scale of all the injustices in the history of the world, this one surely didn’t crack the top quadrillion. But for me, in that moment, it came to stand for all the others. Which has always been my main weakness as a person, that injustice affects me in that way.

Speaking of which, there was one part of the DC trip that went exactly like it was supposed to. That was our visit to the United States Holocaust Memorial Museum. Why? Because I feel like that museum, unlike all the rest, tells me the truth about the nature of the world that I was born into—and seeing the truth is perversely comforting. I was born into a world that right now, every day, is filled with protesters screaming for my death, for my family’s death—and this is accepted as normal, and those protesters sleep soundly at night, congratulating themselves for their progressivism and enlightenment. And thinking about those protesters, and their predecessors 80 years ago who perpetrated the Holocaust or who stood by and let it happen, is the only thing that really puts blankfaced museum employees into perspective for me. Like, of course a world with the former is also going to have the latter—and I should count myself immeasurably lucky if the latter is all I have to deal with, if the empty-skulled and the soul-dead can only ruin my vacation and lack the power to murder my family.

And to anyone who reached the end of this post and who feels like it was an unwelcome imposition on their time: I’m sorry. But the truth is, posts like this are why I started this blog and why I continue it. If I’ve ever imparted any interesting information or ideas, that’s a byproduct that I’m thrilled about. But I’m cursed to be someone who wakes up every morning, walks around every day, and goes to sleep every night crushed by the weight of the world’s injustice, and outside of technical subjects, the only thing that’s ever motivated me to write is that words are the only justice available to me.

On being faceless

Wednesday, March 6th, 2024

Update: Alright, I’m back in. (After trying the same recovery mechanisms that didn’t work before, but suddenly did work this afternoon.) Thanks also to the Facebook employee who emailed offering to help. Now I just need to decide the harder question of whether I want to be back in!


So I’ve been locked out of Facebook and Messenger, possibly forever. It started yesterday morning, when Facebook went down for the entire world. Now it’s back up for most people, but I can’t get in—neither with passwords (none of which work), nor with text messages to my phone (my phone doesn’t receive them for some reason). As a last-ditch measure, I submitted my driver’s license into a Facebook black hole from which I don’t expect to hear back.

Incidentally, this sort of thing is why, 25 years ago, I became a theoretical rather than applied computer scientist. Even before you get to any serious software engineering, the applied part of computing involves a neverending struggle to make machines do what you need them to do—get a document to print, a website to load, a software package to install—in ways that are harrowing and not the slightest bit intellectually interesting. You learn, not about the nature of reality, but only about the terrible design decisions of other people. I might as well be a 90-year-old grandpa with such things, and if I didn’t have the excuse of being a theorist, that fact would constantly humiliate me before my colleagues.

Anyway, maybe some Facebook employee will see this post and decide to let me back in. Otherwise, it feels like a large part of my life has been cut away forever—but maybe that’s good, like cutting away a malignant tumor. Maybe, even if I am let back in, I should refrain from returning, or at least severely limit the time I spend there.

The truth is that, over the past eight years or so, I let more and more of my online activity shift from this blog to Facebook. Partly that’s because (as many others have lamented) the Golden Age of Blogs came to an end, with intellectual exploration and good-faith debate replaced by trolling, sniping, impersonation, and constant attempts to dox opponents and ruin their lives. As a result, more and more ideas for new blog posts stayed in my drafts folder—they always needed just one more revision to fortify them against inevitable attack, and then that one more revision never happened. It was simply more comfortable to post my ideas on Facebook, where the feedback came from friends and colleagues using their real names, and where any mistakes I made would be contained. But, on the reflection that comes from being locked out, maybe Facebook was simply a trap. What I have neither the intellectual courage to say in public, nor the occasion to say over dinner with real-life friends and family and colleagues, maybe I should teach myself not to say at all.

Weird but cavity-free

Friday, December 8th, 2023

Over at Astral Codex Ten, the other Scott A. blogs in detail about a genetically engineered mouth bacterium that metabolizes sugar into alcohol rather than acid, thereby (assuming it works as intended) ending dental cavities forever. Despite good results in trials with hundreds of people, this bacterium has spent decades in FDA approval hell. It’s in the news because Lantern Bioworks, a startup founded by rationalists, is now trying again to legalize it.

Just another weird idea that will never see the light of day, I’d think … if I didn’t have these bacteria in my mouth right now.

Here’s how it happened: I’d read earlier about these bacteria, and was venting to a rationalist of my acquaintance about the blankfaces who keep that and a thousand other medical advances from ever reaching the public, and who sleep soundly at night, congratulating themselves for their rigor in enforcing nonsensical rules.

“Are you serious?” the rationalist asked me. “I know the people in Berkeley who can get you into the clinical trial for this.”

This was my moment of decision. If I agreed to put unapproved bacteria into my mouth on my next trip to Berkeley, I could live my beliefs and possibly never get cavities again … but on the other hand, friends and colleagues would think I was weird when I told them.

Then again, I mused, four years ago most people would think you were weird if you said that a pneumonia spreading at a seafood market in Wuhan was about to ignite a global pandemic, and also that chatbots were about to go from ELIZA-like jokes to the technological powerhouses transforming civilization.

And so it was that I found myself brushing a salty, milky-white substance onto my teeth. That was last month. I … haven’t had any cavities since, for what it’s worth? Nor have I felt drunk, despite the ever-so-slightly elevated ethanol in my system. Then again, I’m not even 100% sure that the bacteria took, given that (I confess) the germy substance strongly triggered my gag reflex.

Anyway, read the other Scott’s post, and then ask yourself: will you try this, once you can? If not, is it just because it seems too weird?

Update: See a Hacker News thread where the merits of this new treatment are debated.

Book Review: “Quantum Supremacy” by Michio Kaku (tl;dr DO NOT BUY)

Friday, May 19th, 2023

Update (June 6): I wish to clarify that I did not write any of the dialogue for the “Scott Aaronson” character who refutes Michio Kaku’s quantum computing hype in this YouTube video, which uses an AI recreation of my voice. The writer appears to be physics/math blogger and podcaster Hassaan Saleem; see his website here. Luckily, the character and I do share many common views; I’m sure we’d hit it off if we met.


When I was a teenager, I enjoyed reading Hyperspace, an early popularization of string theory by the theoretical physicist Michio Kaku. I’m sure I’d have plenty of criticisms if I reread it today, but at the time, I liked it a lot. In the decades since, Kaku has widened his ambit to, well, pretty much everything, regularly churning out popular books with subtitles like “How Science Will Revolutionize the 21st Century” and “How Science Will Shape Human Destiny and Our Daily Lives.” He’s also appeared on countless TV specials, in many cases to argue that UFOs likely contain extraterrestrial visitors.

Now Kaku has a new bestseller about quantum computing, creatively entitled Quantum Supremacy. He even appeared on Joe Rogan a couple weeks ago to promote the book, surely reaching an orders-of-magnitude larger audience than I have in two decades of trying to explain quantum computing to non-experts. (Incidentally, to those who’ve asked why Joe Rogan hasn’t invited me on his show to explain quantum computing: I guess you now have an answer of sorts!)

In the spirit, perhaps, of the TikTokkers who eat live cockroaches or whatever to satisfy their viewers, I decided to oblige loyal Shtetl-Optimized fans by buying Quantum Supremacy and reading it. So I can now state with confidence: beating out a crowded field, this is the worst book about quantum computing, for some definition of the word “about,” that I’ve ever encountered.

Admittedly, it’s not obvious why I’m reviewing the book here at all. Among people who’ve heard of this blog, I expect that approximately zero would be tempted to buy Kaku’s book, at least if they flipped through a few random pages and saw the … level of care that went into them. Conversely, the book’s target readers have probably never visited a blog like this one and never will. So what’s the use of this post?

Well, as the accidental #1 quantum computing blogger on the planet, I feel a sort of grim obligation here. Who knows, maybe this post will show up in the first page of Google results for Kaku’s book, and it will manage to rescue two or three people from the kindergarten of lies.


Where to begin? Should we just go through the first chapter with a red pen? OK then: on the very first page, Kaku writes,

Google revealed that their Sycamore quantum computer could solve a mathematical problem in 200 seconds that would take 10,000 years on the world’s fastest supercomputer.

No, the “10,000 years” estimate was quickly falsified, as anyone following the subject knows. I’d be the first to stress that the situation is complicated; compared to the best currently-known classical algorithms, some quantum advantage remains for the Random Circuit Sampling task, depending on how you measure it. But to repeat the “10,000 years” figure at this point, with no qualifications, is actively misleading.

Turning to the second page:

[Quantum computers] are a new type of computer that can tackle problems that digital computers can never solve, even with an infinite amount of time. For example, digital computers can never accurately calculate how atoms combine to create crucial chemical reactions, especially those that make life possible. Digital computers can only compute on digital tape, consisting of a series of 0s and 1s, which are too crude to describe the delicate waves of electrons dancing deep inside a molecule. For example, when tediously computing the paths taken by a mouse in a maze, a digital computer has to painfully analyze each possible path, one after the other. A quantum computer, however, simultaneously analyzes all possible paths at the same time, with lightning speed.

OK, so here Kaku has already perpetuated two of the most basic, forehead-banging errors about what quantum computers can do. In truth, anything that a QC can calculate, a classical computer can calculate as well, given exponentially more time: for example, by representing the entire wavefunction, all 2^n amplitudes, to whatever accuracy is needed. That’s why it was understood from the very beginning that quantum computers can’t change what’s computable, but only how efficiently things can be computed.

And then there’s the Misconception of Misconceptions, about how a QC “analyzes all possible paths at the same time”—with no recognition anywhere of the central difficulty, the thing that makes a QC enormously weaker than an exponentially parallel classical computer, but is also the new and interesting part, namely that you only get to see a single, random outcome when you measure, with its probability given by the Born rule. That’s the error so common that I warn against it right below the title of my blog.
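To make both points concrete, here’s a minimal brute-force simulator in Python (my own illustration, nothing from Kaku’s book). It stores all 2^n amplitudes explicitly, which is exactly why classical simulation costs exponential resources yet always works in principle, and its measurement step returns a single random outcome per the Born rule:

```python
import numpy as np

# A minimal state-vector simulator: n qubits are just 2^n complex amplitudes.
# Storing and updating them all is why classical simulation of a QC needs
# exponential resources -- but it always works, given enough time and memory.
n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_gate(state, gate, target):
    """Apply a single-qubit gate to one qubit of the 2^n-entry state vector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

for q in range(n):  # put all qubits into uniform superposition
    state = apply_gate(state, H, q)

# The Born rule: a measurement returns ONE random outcome x, with
# probability |amplitude_x|^2 -- not "all paths at the same time."
probs = np.abs(state) ** 2
outcome = int(np.random.choice(2**n, p=probs))
print(f"measured |{outcome:0{n}b}>")
```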

[Q]uantum computers are so powerful that, in principle, they could break all known cybercodes.

Nope, that’s strongly believed to be false, just like the analogous statement for classical computers. Despite its obvious relevance for business and policy types, the entire field of post-quantum cryptography—including the lattice-based public-key cryptosystems that have by now survived 20+ years of efforts to find a quantum algorithm to break them—receives just a single vague mention, on pages 84-85. The possibility of cryptography surviving quantum computers is quickly dismissed because “these new trapdoor functions are not easy to implement.” (But they have been implemented.)
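For readers curious what a lattice-based cryptosystem even looks like, here’s a toy, deliberately insecure Regev-style LWE sketch (my own illustration with made-up parameters, not any deployed scheme). The point is just that “implemented” means ordinary linear algebra mod q plus small noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Regev-style LWE encryption of a single bit. Parameters are tiny and
# insecure; real lattice schemes use much larger dimensions, but the
# skeleton is the same: linear algebra mod q, plus small noise that is
# conjectured hard to remove even for a quantum computer.
n, q, m = 8, 257, 32
s = rng.integers(0, q, n)            # secret key
A = rng.integers(0, q, (m, n))       # public random matrix
e = rng.integers(-1, 2, m)           # small noise: the hard-to-invert part
b = (A @ s + e) % q                  # public key is (A, b)

def encrypt(bit):
    r = rng.integers(0, 2, m)        # random 0/1 combination of the samples
    return (r @ A) % q, (r @ b + bit * (q // 2)) % q

def decrypt(u, v):
    d = (v - u @ s) % q              # equals bit*(q//2) plus small noise
    return int(min(d, q - d) > q // 4)

u, v = encrypt(1)
print(decrypt(u, v))  # -> 1
```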


There’s no attempt, anywhere in this book, to explain how any quantum algorithm actually works, nor is there a word anywhere about the limitations of quantum algorithms. And yet there’s still enough said to be wrong. On page 84, shortly after confusing the concept of a one-way function with that of a trapdoor function, Kaku writes:

Let N represent the number we wish to factorize. For an ordinary digital computer, the amount of time it takes to factorize a number grows exponentially, like t ~ e^N, times some unimportant factors.

This is a double howler: first, trial division takes only ~√N time; Kaku has confused N itself with its number of digits, ~log₂N. Second, he seems unaware that much better classical factoring algorithms, like the Number Field Sieve, have been known for decades, even though those algorithms play a central role in codebreaking and in any discussion of where the quantum/classical crossover might happen.
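To see the first howler in code (a toy illustration of mine, not anything from the book): even the dumbest factoring algorithm runs in ~√N steps, and the honest size measure of the input is its digit count, not N itself:

```python
import math

def trial_division(N):
    """Classical factoring by trial division: about sqrt(N) steps, not e^N."""
    for d in range(2, math.isqrt(N) + 1):
        if N % d == 0:
            return d
    return N  # no divisor found: N is prime

N = 999983 * 1000003          # a 12-digit semiprime
print(trial_division(N))      # finds 999983 after ~10^6 steps, well under a second
print(len(str(N)))            # the input size is the digit count, ~log10(N) = 12
```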


Honestly, though, the errors aren’t the worst of it. The majority of the book is not even worth hunting for errors in, because fundamentally, it’s filler.

First there’s page after page breathlessly quoting prestigious-sounding people and organizations—Google’s Sundar Pichai, various government agencies, some report by Deloitte—about just how revolutionary they think quantum computing will be. Then there are capsule hagiographies of Babbage and Lovelace, Gödel and Turing, Planck and Einstein, Feynman and Everett.

And then the bulk of the book is actually about stuff with no direct relation to quantum computing at all—the origin of life, climate change, energy generation, cancer, curing aging, etc.—except with ungrounded speculations tacked onto the end of each chapter about how quantum computers will someday revolutionize all of this. Personally, I’d say that

  1. Quantum simulation speeding up progress in biochemistry, high-temperature superconductivity, and the like is at least plausible—though very far from guaranteed, since one has to beat the cleverest classical approaches that can be designed for the same problems (a point that Kaku nowhere grapples with).
  2. The stuff involving optimization, machine learning, and the like is almost entirely wishful thinking.
  3. Not once in the book has Kaku even mentioned the intellectual tools (e.g., looking at actual quantum algorithms like Grover’s algorithm or phase estimation, and their performance on various tasks) that would be needed to distinguish 1 from 2.

In his acknowledgments section, Kaku simply lists a bunch of famous scientists he’s met in his life—Feynman, Witten, Hawking, Penrose, Brian Greene, Lisa Randall, Neil deGrasse Tyson. Not a single living quantum computing researcher is acknowledged, not one.

Recently, I’d been cautiously optimistic that, after decades of overblown headlines about “trying all answers in parallel,” “cracking all known codes,” etc., the standard for quantum computing popularization was slowly creeping upward. Maybe I was just bowled over by this recent YouTube video (“How Quantum Computers Break the Internet… Starting Now”), which despite its clickbait title and its slick presentation, miraculously gets essentially everything right, shaming the hypesters by demonstrating just how much better it’s possible to do.

Kaku’s slapdash “book,” and the publicity campaign around it, represents a noxious step backwards. The wonder of it, to me, is that Kaku holds a PhD in theoretical physics. And yet the average English major who’s written a “what’s the deal with quantum computing?” article for some obscure link aggregator site has done a more careful and honest job than Kaku has. That’s setting the bar about a millimeter off the floor. I think the difference is, at least the English major knows that they’re supposed to call an expert or two, when writing about an enormously complicated subject of which they’re completely ignorant.


Update: I’ve now been immersed in the AI safety field for one year, yet I wouldn’t consider myself nearly ready to write a book on the subject. My knowledge of related parts of CS, my year studying AI in grad school, and my having created the subject of computational learning theory of quantum states would all be relevant but totally insufficient. And AI safety, for all its importance, has less than quantum computing does in the way of difficult-to-understand concepts and results that basically everyone in the field agrees about. And if I did someday write such a book, I’d be pretty terrified of getting stuff wrong, and would have multiple expert colleagues read drafts.

In case this wasn’t clear enough from my post, Kaku appears to have had zero prior engagement with quantum computing, and also to have consulted zero relevant experts who could’ve fixed his misconceptions.

Brief Update on Texan Tenure

Sunday, May 7th, 2023

Update (May 8): Some tentative good news! It looks like there’s now a compromise bill in the House that would preserve tenure, insisting only on the sort of post-tenure review that UT (like most universities) already has.

Update (May 9): Alas, it looks like the revised bill is not much better. See this thread from Keith Whittington of the Academic Freedom Alliance.


I blogged a few weeks ago about SB 18, a bill that would end tenure at Texas public universities, including UT Austin and Texas A&M. The bad news is that SB 18 passed the Texas Senate. The good news is that I’m told—I don’t know how reliably—that it has little chance of passing the House.

But it’s going to be discussed in the House tomorrow. Any Texas residents reading this can, and are strongly urged to, submit brief comments here. Please note that the deadline is tomorrow (Monday) morning.

I just submitted the comment below. Obviously, among the arguments that I genuinely believe, I made only those that I expect might have some purchase on a Texas Republican.


I’m a professor of computer science at UT Austin, specializing in quantum computing.  I am however writing this statement strictly in my capacity as a private citizen and Texas resident, not in my professional capacity.

Like the supporters of SB 18, I too see leftist ideological indoctrination on college campuses as a serious problem.  It’s something that I and many other moderates and classical liberals in academia have been pushing back on for years.

But my purpose in this comment is to explain why eliminating tenure at UT Austin and Texas A&M is NOT the solution — indeed, it would be the equivalent of treating a tumor by murdering the patient.

I’ve seen firsthand how already, just the *threat* that SB 18 might pass has seriously hampered our ability to recruit the best scientists and engineers to become faculty at UT Austin.  If this bill were actually to pass, I expect that the impact on our recruiting would be total and catastrophic.  It would effectively mean the end of UT Austin as one of the top public universities in the country.  Hundreds of scientists who were lured to Texas by UT’s excellence, including me and my wife, would start looking for jobs elsewhere — even those whose own tenure was “grandfathered in.”  They’d leave en masse for California and Massachusetts and anywhere else they could continue the lives they’d planned.

The reality is this: the sorts of scientists and engineers we’re talking about could typically make vastly higher incomes, in the high six figures or even seven figures, by working in private industry or forming their own startups.  Yet they choose to accept much lower salaries to spend their careers in academia.  Why?  Because of the promise of a certain way of life: one where they can speak freely as scholars and individuals without worrying about how it will affect their employment.  Tenure is a central part of that promise.  Remove it, and the value proposition collapses.

In some sense, the state of Texas (like nearly every other state) actually gets a bargain through tenure.  It couldn’t possibly afford to retain top-caliber scientists and engineers — working on medical breakthroughs, revolutionary advances in AI, and all the other stuff — if it DIDN’T offer tenure.

For this reason, I hope that even conservatives in the Texas House will see that we have a common interest here, in ensuring SB 18 never even makes it out of committee — for the sake of the future of innovation in Texas.  I’m open to other possible responses to the problem of political indoctrination on campus.

Will UT Austin and Texas A&M survive beyond this week?

Monday, April 17th, 2023

Update (April 20): Alas, the Texas Senate has approved SB 18. The survival of higher education in Texas now hinges on this bill not being taken up or passed in the House, or not being enforced as written (e.g., because UT’s existing post-tenure review system is judged to satisfy it).


This week, the Texas Senate will take up SB 18, a bill to ban the granting of tenure at all public universities in Texas, including UT Austin and Texas A&M. (Those of us who have tenure would retain it, for what little that’s worth.)

[Update: I’ve learned that, even if this bill passes the Senate, there’s a good chance that it will get watered down or die in the House, or found to be satisfied by UT’s existing system of post-tenure review. That’s the only reason why people in the know aren’t panicking even more than they are.]

I find it hard to imagine that SB 18 will actually pass both houses and be enforced as written, simply because it’s obvious that if it did, it would be the end of UT Austin and Texas A&M as leading research universities. More precisely, it would be the immediate end of our ability to recruit competitively, and the slightly slower end of our competitiveness, period, as faculty with options moved elsewhere. This is so because of the economics of faculty hiring. Particularly in STEM fields like computer science, those who become professors typically forgo vastly higher salaries in industry, not to mention equity in startup companies and so on. Why would we do such a nutty thing? Because we like a certain lifestyle. We’re willing to move several economic strata downward in return for jobs where (in principle) no one can fire us without cause, or tell us what we’re allowed to say or publish. The evidence from industry labs (Google, Facebook, Microsoft, etc.) suggests that, in competitive fields, for Texas to attract and retain top faculty without tenure would require paying them hundreds of thousands more per year. In that sense, tenure is a bargain for universities and the state. Of course the situation is a bit different for art history and English literature, but in any case SB 18 makes no distinction between fields.

The Texas Senate is considering two other bills this week: SB 17, which would ban all DEI (Diversity, Equity, and Inclusion) programs, offices, and practices at public universities, and SB 16, which would require the firing of any professor if they “compel or attempt to compel a student … to adopt a belief that any race, sex, or ethnicity or social, political, or religious belief is inherently superior to any other race, sex, ethnicity, or belief.” (The language here seems sloppy to me: is liberal democracy “inherently superior” to Nazism? Would teaching students about the horrors of Nazism count as “attempting to compel them” to accept this superiority?)

Taken together, it’s clear that the goal is to hit back hard against “wokeness” in academia, and thereby satisfy the Republican base.

Here’s the thing: there really is an illiberal ideology that’s taken over parts of academia (not all of it)—an ideology that Tim Urban, in his wonderful recent book What’s Our Problem?, usefully terms “Social Justice Fundamentalism” or SJF, to distinguish it sharply from “Liberal Social Justice,” the ideology of (for example) the Civil Rights movement. Now, I’m on record as not a fan of the SJF ideology, to put it mildly, and the SJF ideology is on record as not a fan of me. In 2015, I was infamously dragged through the mud of Salon, The New Republic, Raw Story, and many other magazines and websites for a single blog comment criticizing a form of feminism that had contributed to making my life miserable, even while I proudly called myself a liberal feminist (and still do). More recently, wokesters have written to my department chair trying to get me disciplined or fired, for everything from my use of the now-verboten term “quantum supremacy,” to a reference to female breasts in a poem I wrote as a student that was still on my homepage. (These attempts thankfully went nowhere. Notwithstanding what you read, sanity retains many strongholds in academia.)

Anyway, despite all of this, the Texas Republicans have somehow succeeded in making me more afraid of them, purely on the level of professional survival, than I’ve ever been of the Social Justice Fundamentalists. In effect, the Republicans propose to solve the “problem of wokeness” by simply dropping thermonuclear weapons on all Texas public universities, thereby taking out me and my colleagues as collateral damage—regardless of our own views on wokeness or anything else, and regardless of what we’re doing for Texas’ scientific competitiveness.

I don’t expect that most of my readers, in or out of Texas, will need to be persuaded about any of this—nor am I expecting to change many minds on the other side. Mostly, I’m writing this post in the hope that some well-connected moderates here in Austin will link to it, and the post might thereby play a tiny role in helping Texas’ first-rate public universities live one more day. (And to any such moderates: yes, I’m happy to meet in person with you or your colleagues, if that would help!) Some posts are here on this blog for no better reason than, y’know, moral obligation.

Xavier Waintal responds (tl;dr Grover is still quadratically faster)

Thursday, March 23rd, 2023

This morning Xavier Waintal, coauthor of the new arXiv preprint “””refuting””” Grover’s algorithm, which I dismantled here yesterday, emailed me a two-paragraph response. He remarked that the “classy” thing for me to do would be to post the response on my blog, but: “I would totally understand if you did not want to be contradicted in your own zone of influence.”

Here is Waintal’s response, exactly as sent to me:

The elephant in the (quantum computing) room: opening the Pandora box of the quantum oracle

One of the problem we face in the field of quantum computing is a vast diversity of cultures between, say, complexity theorists on one hand and physicists on the other hand. The former define mathematical objects and consider any mathematical problem as legitimate. The hypothesis are never questioned, by definition. Physicists on the other hand spend their life questioning the hypothesis, wondering if they do apply to the real world. This dichotomy is particularly acute in the context of the emblematic Grover search algorithm, one of the cornerstone of quantum computing. Grover’s algorithm uses the concept of “oracle”, a black box function that one can call, but of which one is forbidden to see the source code. There are well known complexity theorems that show that in this context a quantum computer can solve the “search problem” faster than a classical computer.

But because we closed our eyes and decided not to look at the source code does not mean it does not exist. In https://arxiv.org/pdf/2303.11317.pdf, Miles Stoudenmire and I deconstruct the concept of oracle and show that as soon as we give the same input to both quantum and classical computers (the quantum circuit used to program the oracle on the actual quantum hardware) then the *generic* quantum advantage disappears. The charge of the proof is reversed: one must prove certain properties of the quantum circuit in order to possibly have a theoretical quantum advantage. More importantly – for the physicist that I am – our classical algorithm is very fast and we show that we can solve large instances of any search problem. This means that even for problems where *asymptotically* quantum computers are faster than classical ones, the crossing point where they become so is for astronomically large computing time, in the tens of millions of years. Who is willing to wait that long for the answer to a single question, even if the answer is 42?

The above explicitly confirms something that I realized immediately on reading the preprint, and that fully explains the tone of my response. Namely, Stoudenmire and Waintal’s beef isn’t merely with Grover’s algorithm, or even with the black-box model; it’s with the entire field of complexity theory. If they were right that complexity theorists never “questioned hypotheses” or wondered what did or didn’t apply to the real world, then complexity theory shouldn’t exist in CS departments at all—at most it should exist in pure math departments.

But a converse claim is also true. Namely, suppose it turned out that complexity theorists had already fully understood, for decades, all the elementary points Stoudenmire and Waintal were making about oracles versus explicit circuits. Suppose complexity theorists hadn’t actually been confused, at all, about under what sorts of circumstances the square-root speedup of Grover’s algorithm was (1) provable, (2) plausible but unproven, or (3) nonexistent. Suppose they’d also been intimately familiar with the phenomenon of asymptotically faster algorithms that get swamped in practice by unfavorable constants, and with the overhead of quantum error-correction. Suppose, indeed, that complexity theorists hadn’t merely understood all this stuff, but expressed it clearly and accurately where Stoudenmire and Waintal’s presentation was garbled and mixed with absurdities (e.g., the Grover problem “being classically solvable with a linear number of queries,” the Grover speedup not being “generic,” their being able to “solve large instances of any search problem” … does that include, for example, CircuitSAT? do they still not get the point about CircuitSAT?). Then Stoudenmire and Waintal’s whole objection would collapse.

Anyway, we don’t have to suppose! In the SciRate discussion of the preprint, a commenter named Bibek Pokharel helpfully digs up some undergraduate lecture notes from 2017 that are perfectly clear about what Stoudenmire and Waintal treat as revelations (though one could even go 20+ years earlier). The notes are focused here on Simon’s algorithm, but the discussion generalizes to any quantum black-box algorithm, including Grover’s:

The difficulty in claiming that we’re getting a quantum speedup [via Simon’s algorithm] is that, once we pin down the details of how we’re computing [the oracle function] f—so, for example, the actual matrix A [such that f(x)=Ax]—we then need to compare against classical algorithms that know those details as well. And as soon as we reveal the innards of the black box, the odds of an efficient classical solution become much higher! So for example, if we knew the matrix A, then we could solve Simon’s problem in classical polynomial time just by calculating A‘s nullspace. More generally, it’s not clear whether anyone to this day has found a straightforward “application” of Simon’s algorithm, in the sense of a class of efficiently computable functions f that satisfy the Simon promise, and for which any classical algorithm plausibly needs exponential time to solve Simon’s problem, even if the algorithm is given the code for f.
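For concreteness, here’s how that classical nullspace attack looks in code, in a quick GF(2) sketch of my own (not from the notes):

```python
import numpy as np

def gf2_nullspace(A):
    """Basis for the nullspace of A over GF(2), by Gaussian elimination."""
    A = A.copy() % 2
    rows, cols = A.shape
    pivot_cols, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            continue  # free column
        A[[r, pivot]] = A[[pivot, r]]       # swap pivot row into place
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]                # eliminate mod 2
        pivot_cols.append(c)
        r += 1
    basis = []
    for c in range(cols):                   # one basis vector per free column
        if c not in pivot_cols:
            v = np.zeros(cols, dtype=int)
            v[c] = 1
            for i, pc in enumerate(pivot_cols):
                v[pc] = A[i, c]
            basis.append(v)
    return basis

# If f(x) = Ax mod 2, any nonzero nullspace vector is a valid Simon "period":
A = np.array([[1, 0, 1], [0, 1, 1]])
print(gf2_nullspace(A))  # -> [array([1, 1, 1])]; indeed A @ [1,1,1] = 0 mod 2
```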

In the same lecture notes, one can find the following discussion of Grover’s algorithm, and how its unconditional square-root speedup becomes conditional (albeit, still extremely plausible in many cases) as soon as the black box is instantiated by an explicit circuit:

For an NP-complete problem like CircuitSAT, we can be pretty confident that the Grover speedup is real, because no one has found any classical algorithm that’s even slightly better than brute force. On the other hand, for more “structured” NP-complete problems, we do know exponential-time algorithms that are faster than brute force. For example, 3SAT is solvable classically in about O(1.3^n) time. So then, the question becomes a subtle one of whether Grover’s algorithm can be combined with the best classical tricks that we know to achieve a polynomial speedup even compared to a classical algorithm that uses the same tricks. For many NP-complete problems the answer seems to be yes, but it need not be yes for all of them.

The notes in question were written by some random complexity theorist named Scot Aronsen (sp?). But if you don’t want to take it from that guy, then take it from (for example) the Google quantum chemist Ryan Babbush, again on the SciRate page:

It is well understood that applying Grover’s algorithm to 3-SAT in the standard way would not give a quadratic speedup over the best classical algorithm for 3-SAT in the worst case (and especially not on average). But there are problems for which Grover is expected to give a quadratic speedup over any classical algorithm in the worst case. For example, the problem “Circuit SAT” starts by me handing you a specification of a poly-size classical circuit with AND/OR/NOT gates, so it’s all explicit. Then you need to solve SAT on this circuit. Classically we strongly believe it will take time 2^n (this is even the basis of many conjectures in complexity theory, like the exponential time hypothesis), and quantumly we know it can be done with 2^{n/2}*poly(n) quantum gates using Grover and the explicitly given classical circuit. So while I think there are some very nice insights in this paper, the statement in the title “Grover’s Algorithm Offers No Quantum Advantage” seems untrue in a general theoretical sense. Of course, this is putting aside issues with the overheads of error-correction for quadratic speedups (a well understood practical matter that is resolved by going to large problem sizes that wouldn’t be available to the first fault-tolerant quantum computers). What am I missing?
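Babbush’s 3SAT point is easy to check numerically. Here’s a back-of-the-envelope comparison (my own arithmetic, not his): since 1.3 < √2 ≈ 1.414, naively Grover-izing brute force eventually loses to the best known classical algorithm, which is exactly why the subtle question is whether Grover composes with the classical tricks:

```python
# Back-of-the-envelope step counts for 3SAT on n variables:
# brute force (2^n), best known classical (~1.3^n), and naive Grover
# applied to brute force (2^(n/2) = ~1.414^n). Since 1.3 < sqrt(2),
# naive Grover eventually LOSES to the structured classical algorithm.
for n in (50, 100, 150):
    print(f"n={n}: 2^n={2**n:.1e}  1.3^n={1.3**n:.1e}  2^(n/2)={2**(n/2):.1e}")
```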

More generally, over the past few days, as far as I can tell, every actual expert in quantum algorithms who’s looked at Stoudenmire and Waintal’s preprint—every one, whether complexity theorist or physicist or chemist—has reached essentially the same conclusions about it that I did. The one big difference is that many of the experts, who are undoubtedly better people than I am, extended a level of charity to Stoudenmire and Waintal (“well, this of course seems untrue, but here’s what it could have meant”) that Stoudenmire and Waintal themselves very conspicuously failed to extend to complexity theory.

Of course Grover’s algorithm offers a quantum advantage!

Wednesday, March 22nd, 2023

Unrelated Update: Huge congratulations to Ethernet inventor Bob Metcalfe, for winning UT Austin’s third Turing Award after Dijkstra and Emerson!

And also to mathematician Luis Caffarelli, for winning UT Austin’s third Abel Prize!


I was really, really hoping that I’d be able to avoid blogging about this new arXiv preprint, by E. M. Stoudenmire and Xavier Waintal:

Grover’s Algorithm Offers No Quantum Advantage

Grover’s algorithm is one of the primary algorithms offered as evidence that quantum computers can provide an advantage over classical computers. It involves an “oracle” (external quantum subroutine) which must be specified for a given application and whose internal structure is not part of the formal scaling of the quantum speedup guaranteed by the algorithm. Grover’s algorithm also requires exponentially many steps to succeed, raising the question of its implementation on near-term, non-error-corrected hardware and indeed even on error-corrected quantum computers. In this work, we construct a quantum inspired algorithm, executable on a classical computer, that performs Grover’s task in a linear number of call to the oracle – an exponentially smaller number than Grover’s algorithm – and demonstrate this algorithm explicitly for boolean satisfiability problems (3-SAT). Our finding implies that there is no a priori theoretical quantum speedup associated with Grover’s algorithm. We critically examine the possibility of a practical speedup, a possibility that depends on the nature of the quantum circuit associated with the oracle. We argue that the unfavorable scaling of the success probability of Grover’s algorithm, which in the presence of noise decays as the exponential of the exponential of the number of qubits, makes a practical speedup unrealistic even under extremely optimistic assumptions on both hardware quality and availability.

Alas, inquiries from journalists soon made it clear that silence on my part wasn’t an option.

So, desperately seeking an escape, this morning I asked GPT-4 to read the preprint and comment on it just like I would. Sadly, it turns out the technology isn’t quite ready to replace me at this blogging task. I suppose I should feel good: in every such instance, either I’m vindicated in all my recent screaming here about generative AI—what the naysayers call “glorified autocomplete”—being on the brink of remaking civilization, or else I still, for another few months at least, have a role to play on the Internet.

So, on to the preprint, as reviewed by the human Scott Aaronson. Yeah, it’s basically a tissue of confusions, a mishmash of the well-known and the mistaken. As they say, both novel and correct, but not in the same places.

The paper’s most eye-popping claim is that the Grover search problem—namely, finding an n-bit string x such that f(x)=1, given oracle access to a Boolean function f:{0,1}^n→{0,1}—is solvable classically, using a number of calls that’s only linear in n, or in many cases only constant (!!). Since this claim contradicts a well-known, easily provable lower bound—namely, that Ω(2^n) oracle calls are needed for classical brute-force searching—the authors must be using words in nonstandard ways, leaving only the question of how.

It turns out that, for their “quantum-inspired classical algorithm,” the authors assume you’re given, not merely an oracle for f, but the actual circuit to compute f. They then use that circuit in a non-oracular way to extract the marked item. In which case, I’d prefer to say that they’ve actually solved the Grover problem with zero queries—simply because they’ve entirely left the black-box setting where Grover’s algorithm is normally formulated!
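For the uninitiated, here’s the black-box vs. white-box distinction in miniature (a toy sketch of mine, not the authors’ actual construction):

```python
SECRET = 0b10110110  # hidden inside the oracle

def oracle(x):
    """Black box: you may call it, but not look inside."""
    return x == SECRET

n = 8

# Oracle access only: provably, nothing beats brute force -- Omega(2^n) calls.
calls = 0
for x in range(2**n):
    calls += 1
    if oracle(x):
        break
print(f"black-box search: {calls} oracle calls")

# "White-box" access: once the description of f is part of the input, a
# classical algorithm can exploit its structure -- here, trivially, reading
# the marked item straight off the description with ZERO oracle calls.
print(f"white-box search: answer {SECRET}, 0 oracle calls")
```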

What could possibly justify such a move? Well, the authors argue that sometimes one can use the actual circuit to do better classically than Grover’s algorithm would do quantumly, and therefore, they’ve shown that the Grover speedup is not “generic,” as the quantum algorithms people always say it is.

But this is pure wordplay around the meaning of “generic.” When we say that Grover’s algorithm achieves a “generic” square-root speedup, what we mean is that it solves the generic black-box search problem in O(2^{n/2}) queries, whereas any classical algorithm for that generic problem requires Ω(2^n) queries. We don’t mean that for every f, Grover achieves a quadratic speedup for searching that f, compared to the best classical algorithm that could be tailored to that f. Of course we don’t; that would be trivially false!

Remarkably, later in the paper, the authors seem to realize that they haven’t delivered the knockout blow against Grover’s algorithm that they’d hoped for, because they then turn around and argue that, well, even for those f’s where Grover does provide a quadratic speedup over the best (or best-known) classical algorithm, noise and decoherence could negate the advantage in practice, and solving that problem would require a fault-tolerant quantum computer, but fault-tolerance could require an enormous overhead, pushing a practical Grover speedup far into the future.

The response here boils down to “no duh.” Yes, if Grover’s algorithm can yield any practical advantage in the medium term, it will either be because we’ve discovered much cheaper ways to do quantum fault-tolerance, or else because we’ve discovered “NISQy” ways to exploit the Grover speedup, which avoid the need for full fault-tolerance—for example, via quantum annealing. The prospects are actually better for a medium-term advantage from Shor’s factoring algorithm, because of its exponential speedup. Hopefully everyone in quantum computing theory has realized all this for a long time.

Anyway, as you can see, by this point we’ve already conceded the principle of Grover’s algorithm, and are just haggling over the practicalities! Which brings us back to the authors’ original claim to have a principled argument against the Grover speedup, which (as I said) rests on a confusion over words.

Some people dread the day when GPT will replace them. In my case, for this task, I can’t wait.


Thanks to students Yuxuan Zhang (UT) and Alex Meiburg (UCSB) for discussions of the Stoudenmire-Waintal preprint that informed this post. Of course, I take sole blame for anything anyone dislikes about the post!


For a much more technical response—one that explains how this preprint’s detailed attempt to simulate Grover classically fails, rather than merely proving that it must fail—check out this comment by Alex Meiburg.

Visas for Chinese students: US shoots itself in the foot again

Tuesday, February 14th, 2023

Coming out of blog-hiatus for some important stuff, today, tomorrow, and the rest of the week.

Something distressing happened to me yesterday for the first time, but I fear not the last. We (UT Austin) admitted a PhD student from China who I know to be excellent, and who wanted to work with me. That student, alas, has had to decline our offer of admission, because he’s been denied a US visa under Section 212(A)(3)(a)(i), which “prohibits the issuance of a visa to anyone who seeks to enter the United States to violate or evade any law prohibiting the export from the United States of goods, technology, or sensitive information.” Quantum computing, you see, is now a “prohibited technology.”

This visa denial is actually one that the American embassy in Beijing only just now got around to issuing, for an application the student filed a year and a half ago, when he wanted to come visit me for a semester as an undergrad. For context, the last time I had an undergrad from China visit me for a semester, back in 2016, the undergrad’s name was Lijie Chen. Lijie just finished his PhD at MIT under Ryan Williams and is now one of the superstars of theoretical computer science. Anyway, in Fall 2021 I got an inquiry from a Chinese student who bowled me over the same way Lijie had, so I invited him to spend a semester with my group in Austin. This time, alas, the student never heard back when he applied for a visa, and was therefore unable to come. He ended up doing an excellent research project with me anyway, working remotely from China, checking in by Zoom, and even participating in our research group meetings (which were on Zoom anyway because of the pandemic).

Anyway, for reasons too complicated to explain, this previous denial means that the student would almost certainly be denied for a new visa to come to the US to do a PhD in quantum computing. (Unless some immigration lawyer reading this can suggest a way out!) The student is not sure what he’s going to do next, but it might involve staying in China, or applying in Europe, or applying in the US again after a year but without mentioning the word “quantum.”

It should go without saying, to anyone reading this, that the student wants to do basic research in quantum complexity theory that’s extraordinarily unlikely to have any direct military or economic importance … just like my own research! 🙂 And it should also go without saying that, if the US really wanted to strike a blow against authoritarianism in Beijing, then it could hardly do better than to hand out visas to every Chinese STEM student and researcher who wanted to come here. Yes, some would return to China with their new skills, but a great many would choose to stay in the US … if we let them.

And I’ve pointed all this out to a Republican Congressman, and to people in the military and intelligence agencies, when they asked me “what else the US can do to win the quantum computing race against China?” And I’ll continue to say it to anyone who asks. The Congressman, incidentally, even said that he privately agreed with me, but that the issue was politically difficult. I wonder: is there anyone in power in the US, in either party, who doesn’t privately agree that opening the gates to China’s STEM talent would be a win/win proposition for the US … including for the US’s national security? If so, who are these people? Is this just a naked-emperor situation, where everyone in Congress fears to raise the issue because they fear backlash from someone else, but the someone else is actually thinking the same way?

And to any American who says, “yeah, but China totally deserves it, because of that spy balloon, and their threats against Taiwan, and all the spying they do with TikTok”—I mean, like, imagine if someone tried to get back at the US government for the Iraq War or for CIA psyops or whatever else by punishing you, by curtailing your academic dreams. It would make exactly as much sense.

Cargo Cult Quantum Factoring

Wednesday, January 4th, 2023

Just days after we celebrated my wife’s 40th birthday, she came down with COVID, meaning she’s been isolating and I’ve been spending almost all my time dealing with our kids.

But if experience has taught me anything, it’s that the quantum hype train never slows down. In the past 24 hours, at least four people have emailed to ask me about a new paper entitled “Factoring integers with sublinear resources on a superconducting quantum processor.” Even the security expert Bruce Schneier, while skeptical, took the paper surprisingly seriously.

The paper claims … well, it’s hard to pin down what it claims, but it’s certainly given many people the impression that there’s been a decisive advance on how to factor huge integers, and thereby break the RSA cryptosystem, using a near-term quantum computer. Not by using Shor’s Algorithm, mind you, but by using the deceptively similarly named Schnorr’s Algorithm. The latter is a classical algorithm based on lattices, which the authors then “enhance” using the heuristic quantum optimization method called QAOA.

For those who don’t care to read further, here is my 3-word review:

No. Just No.

And here’s my slightly longer review:

Schnorr ≠ Shor. Yes, even when Schnorr’s algorithm is dubiously “enhanced” using QAOA—a quantum algorithm that, incredibly, for all the hundreds of papers written about it, has not yet been convincingly argued to yield any speedup for any problem whatsoever, besides, as it were, the problem of reproducing its own pattern of errors (one possible recent exception comes from Sami Boulebnane and Ashley Montanaro).

In the new paper, the authors spend page after page saying-without-saying that it might soon become possible to break RSA-2048, using a NISQ (i.e., non-fault-tolerant) quantum computer. They do so via two time-tested stratagems:

  1. the detailed exploration of irrelevancies (mostly, optimization of the number of qubits, while ignoring the number of gates), and
  2. complete silence about the one crucial point.

Then, finally, they come clean about the one crucial point in a single sentence of the Conclusion section:

It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA.

“Unclear” is an understatement here. It seems to me that a miracle would be required for the approach here to yield any benefit at all, compared to just running the classical Schnorr’s algorithm on your laptop. And if the latter were able to break RSA, it would’ve already done so.

All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen … many. Having said that, this actually isn’t the first time I’ve encountered the strange idea that the exponential quantum speedup for factoring integers, which we know about from Shor’s algorithm, should somehow “rub off” onto quantum optimization heuristics that embody none of the actual insights of Shor’s algorithm, as if by sympathetic magic. Since this idea needs a name, I’d hereby like to propose Cargo Cult Quantum Factoring.

And with that, I feel I’ve adequately discharged my duties here to sanity and truth. If I’m slow to answer comments, it’ll be because I’m dealing with two screaming kids.