Archive for the ‘Complexity’ Category

Of course Grover’s algorithm offers a quantum advantage!

Wednesday, March 22nd, 2023

Unrelated Update: Huge congratulations to Ethernet inventor Bob Metcalfe, for winning UT Austin’s third Turing Award after Dijkstra and Emerson!

And also to mathematician Luis Caffarelli, for winning UT Austin’s third Abel Prize!


I was really, really hoping that I’d be able to avoid blogging about this new arXiv preprint, by E. M. Stoudenmire and Xavier Waintal:

Grover’s Algorithm Offers No Quantum Advantage

Grover’s algorithm is one of the primary algorithms offered as evidence that quantum computers can provide an advantage over classical computers. It involves an “oracle” (external quantum subroutine) which must be specified for a given application and whose internal structure is not part of the formal scaling of the quantum speedup guaranteed by the algorithm. Grover’s algorithm also requires exponentially many steps to succeed, raising the question of its implementation on near-term, non-error-corrected hardware and indeed even on error-corrected quantum computers. In this work, we construct a quantum inspired algorithm, executable on a classical computer, that performs Grover’s task in a linear number of call to the oracle – an exponentially smaller number than Grover’s algorithm – and demonstrate this algorithm explicitly for boolean satisfiability problems (3-SAT). Our finding implies that there is no a priori theoretical quantum speedup associated with Grover’s algorithm. We critically examine the possibility of a practical speedup, a possibility that depends on the nature of the quantum circuit associated with the oracle. We argue that the unfavorable scaling of the success probability of Grover’s algorithm, which in the presence of noise decays as the exponential of the exponential of the number of qubits, makes a practical speedup unrealistic even under extremely optimistic assumptions on both hardware quality and availability.

Alas, inquiries from journalists soon made it clear that silence on my part wasn’t an option.

So, desperately seeking an escape, this morning I asked GPT-4 to read the preprint and comment on it just like I would. Sadly, it turns out the technology isn’t quite ready to replace me at this blogging task. I suppose I should feel good: in every such instance, either I’m vindicated in all my recent screaming here about generative AI—what the naysayers call “glorified autocomplete”—being on the brink of remaking civilization, or else I still, for another few months at least, have a role to play on the Internet.

So, on to the preprint, as reviewed by the human Scott Aaronson. Yeah, it’s basically a tissue of confusions, a mishmash of the well-known and the mistaken. As they say, both novel and correct, but not in the same places.

The paper’s most eye-popping claim is that the Grover search problem—namely, finding an n-bit string x such that f(x)=1, given oracle access to a Boolean function f:{0,1}^n→{0,1}—is solvable classically, using a number of calls that’s only linear in n, or in many cases only constant (!!). Since this claim contradicts a well-known, easily provable lower bound—namely, that Ω(2^n) oracle calls are needed for classical brute-force searching—the authors must be using words in nonstandard ways, leaving only the question of how.

It turns out that, for their “quantum-inspired classical algorithm,” the authors assume you’re given, not merely an oracle for f, but the actual circuit to compute f. They then use that circuit in a non-oracular way to extract the marked item. In which case, I’d prefer to say that they’ve actually solved the Grover problem with zero queries—simply because they’ve entirely left the black-box setting where Grover’s algorithm is normally formulated!

What could possibly justify such a move? Well, the authors argue that sometimes one can use the actual circuit to do better classically than Grover’s algorithm would do quantumly, and therefore, they’ve shown that the Grover speedup is not “generic,” as the quantum algorithms people always say it is.

But this is pure wordplay around the meaning of “generic.” When we say that Grover’s algorithm achieves a “generic” square-root speedup, what we mean is that it solves the generic black-box search problem in O(2^(n/2)) queries, whereas any classical algorithm for that generic problem requires Ω(2^n) queries. We don’t mean that for every f, Grover achieves a quadratic speedup for searching that f, compared to the best classical algorithm that could be tailored to that f. Of course we don’t; that would be trivially false!
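
To put some flesh on that, here’s a minimal Python sketch (a toy oracle of my own choosing, not anything from the preprint) of the arithmetic behind the “generic” quadratic gap: classical brute-force search over n-bit strings needs on the order of 2^n oracle queries, while Grover needs roughly (π/4)·2^(n/2) iterations, each making one query.

    import math
    import random

    n = 20
    marked = random.randrange(2**n)
    f = lambda x: x == marked          # toy black-box function with a single marked item

    def classical_search(f, n):
        """Brute-force black-box search: query f on items in random order until one is marked."""
        for queries, x in enumerate(random.sample(range(2**n), 2**n), start=1):
            if f(x):
                return x, queries

    _, classical_queries = classical_search(f, n)
    grover_queries = math.ceil((math.pi / 4) * math.sqrt(2**n))   # ~ (pi/4) * 2^(n/2)

    print(f"search space size: {2**n}")
    print(f"classical queries this run: {classical_queries} (expected ~{2**(n-1)})")
    print(f"Grover oracle calls: {grover_queries}")

Nothing here runs Grover’s algorithm itself; the point is just the query-count gap that the word “generic” is quantifying over.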

Remarkably, later in the paper, the authors seem to realize that they haven’t delivered the knockout blow against Grover’s algorithm that they’d hoped for, because they then turn around and argue that, well, even for those f’s where Grover does provide a quadratic speedup over the best (or best-known) classical algorithm, noise and decoherence could negate the advantage in practice, and solving that problem would require a fault-tolerant quantum computer, but fault-tolerance could require an enormous overhead, pushing a practical Grover speedup far into the future.

The response here boils down to “no duh.” Yes, if Grover’s algorithm can yield any practical advantage in the medium term, it will either be because we’ve discovered much cheaper ways to do quantum fault-tolerance, or else because we’ve discovered “NISQy” ways to exploit the Grover speedup, which avoid the need for full fault-tolerance—for example, via quantum annealing. The prospects are actually better for a medium-term advantage from Shor’s factoring algorithm, because of its exponential speedup. Hopefully everyone in quantum computing theory has realized all this for a long time.

Anyway, as you can see, by this point we’ve already conceded the principle of Grover’s algorithm, and are just haggling over the practicalities! Which brings us back to the authors’ original claim to have a principled argument against the Grover speedup, which (as I said) rests on a confusion over words.

Some people dread the day when GPT will replace them. In my case, for this task, I can’t wait.


Thanks to students Yuxuan Zhang (UT) and Alex Meiburg (UCSB) for discussions of the Stoudenmire-Waintal preprint that informed this post. Of course, I take sole blame for anything anyone dislikes about the post!


For a much more technical response—one that explains how this preprint’s detailed attempt to simulate Grover classically fails, rather than merely proving that it must fail—check out this comment by Alex Meiburg.

Cargo Cult Quantum Factoring

Wednesday, January 4th, 2023

Just days after we celebrated my wife’s 40th birthday, she came down with COVID, meaning she’s been isolating and I’ve been spending almost all my time dealing with our kids.

But if experience has taught me anything, it’s that the quantum hype train never slows down. In the past 24 hours, at least four people have emailed to ask me about a new paper entitled “Factoring integers with sublinear resources on a superconducting quantum processor.” Even the security expert Bruce Schneier, while skeptical, took the paper surprisingly seriously.

The paper claims … well, it’s hard to pin down what it claims, but it’s certainly given many people the impression that there’s been a decisive advance on how to factor huge integers, and thereby break the RSA cryptosystem, using a near-term quantum computer. Not by using Shor’s Algorithm, mind you, but by using the deceptively similarly named Schnorr’s Algorithm. The latter is a classical algorithm based on lattices, which the authors then “enhance” using the heuristic quantum optimization method called QAOA.

For those who don’t care to read further, here is my 3-word review:

No. Just No.

And here’s my slightly longer review:

Schnorr ≠ Shor. Yes, even when Schnorr’s algorithm is dubiously “enhanced” using QAOA—a quantum algorithm that, incredibly, for all the hundreds of papers written about it, has not yet been convincingly argued to yield any speedup for any problem whatsoever (besides, as it were, the problem of reproducing its own pattern of errors) (one possible recent exception from Sami Boulebnane and Ashley Montanaro).

In the new paper, the authors spend page after page saying-without-saying that it might soon become possible to break RSA-2048, using a NISQ (i.e., non-fault-tolerant) quantum computer. They do so via two time-tested stratagems:

  1. the detailed exploration of irrelevancies (mostly, optimization of the number of qubits, while ignoring the number of gates), and
  2. complete silence about the one crucial point.

Then, finally, they come clean about the one crucial point in a single sentence of the Conclusion section:

It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA.

“Unclear” is an understatement here. It seems to me that a miracle would be required for the approach here to yield any benefit at all, compared to just running the classical Schnorr’s algorithm on your laptop. And if the latter were able to break RSA, it would’ve already done so.

All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen … many. Having said that, this actually isn’t the first time I’ve encountered the strange idea that the exponential quantum speedup for factoring integers, which we know about from Shor’s algorithm, should somehow “rub off” onto quantum optimization heuristics that embody none of the actual insights of Shor’s algorithm, as if by sympathetic magic. Since this idea needs a name, I’d hereby like to propose Cargo Cult Quantum Factoring.

And with that, I feel I’ve adequately discharged my duties here to sanity and truth. If I’m slow to answer comments, it’ll be because I’m dealing with two screaming kids.

Google’s Sycamore chip: no wormholes, no superfast classical simulation either

Friday, December 2nd, 2022

Update (Dec. 6): I’m having a blast at the Workshop on Spacetime and Quantum Information at the Institute for Advanced Study in Princeton. I’m learning a huge amount from the talks and discussions here—and also simply enjoying being back in Princeton, to see old friends and visit old haunts like the Bent Spoon. Tomorrow I’ll speak about my recent work with Jason Pollack on polynomial-time AdS bulk reconstruction. [New: click here for video of my talk!]

But there’s one thing, relevant to this post, that I can’t let pass without comment. Tonight, David Nirenberg, Director of the IAS and a medieval historian, gave an after-dinner speech to our workshop, centered around how auspicious it was that the workshop was being held a mere week after the momentous announcement of a holographic wormhole on a microchip (!!)—a feat that experts were calling the first-ever laboratory investigation of quantum gravity, and a new frontier for experimental physics itself. Nirenberg asked whether, a century from now, people might look back on the wormhole achievement as today we look back on Eddington’s 1919 eclipse observations providing the evidence for general relativity.

I confess: this was the first time I felt visceral anger, rather than mere bemusement, over this wormhole affair. Before, I had implicitly assumed: no one was actually hoodwinked by this. No one really, literally believed that this little 9-qubit simulation opened up a wormhole, or helped prove the holographic nature of the real universe, or anything like that. I was wrong.

To be clear, I don’t blame Professor Nirenberg at all. If I were a medieval historian, everything he said about the experiment’s historic significance might strike me as perfectly valid inferences from what I’d read in the press. I don’t blame the It from Qubit community—most of which, I can report, was grinding its teeth and turning red in the face right alongside me. I don’t even blame most of the authors of the wormhole paper, such as Daniel Jafferis, who gave a perfectly sober, reasonable, technical talk at the workshop about how he and others managed to compress a simulation of a variant of the SYK model into a mere 9 qubits—a talk that eschewed all claims of historic significance and of literal wormhole creation.

But it’s now clear to me that, between

(1) the It from Qubit community that likes to explore speculative ideas like holographic wormholes, and

(2) the lay news readers who are now under the impression that Google just did one of the greatest physics experiments of all time,

something went terribly wrong—something that risks damaging trust in the scientific process itself. And I think it’s worth reflecting on what we can do to prevent it from happening again.


This is going to be one of the many Shtetl-Optimized posts that I didn’t feel like writing, but was given no choice but to write.

News, social media, and my inbox have been abuzz with two claims about Google’s Sycamore quantum processor, the one that now has 72 superconducting qubits.

The first claim is that Sycamore created a wormhole (!)—a historic feat possible only with a quantum computer. See for example the New York Times and Quanta and Ars Technica and Nature (and of course, the actual paper), as well as Peter Woit’s blog and Chad Orzel’s blog.

The second claim is that Sycamore’s pretensions to quantum supremacy have been refuted. The latter claim is based on this recent preprint by Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani. No one—least of all me!—doubts that these authors have proved a strong new technical result, solving a significant open problem in the theory of noisy random circuit sampling. On the other hand, it might be less obvious how to interpret their result and put it in context. See also a YouTube video of Yunchao speaking about the new result at this week’s Simons Institute Quantum Colloquium, and of a panel discussion afterwards, where Yunchao, Umesh Vazirani, Adam Bouland, Sergio Boixo, and your humble blogger discuss what it means.

On their face, the two claims about Sycamore might seem to be in tension. After all, if Sycamore can’t do anything beyond what a classical computer can do, then how exactly did it bend the topology of spacetime?

I submit that neither claim is true. On the one hand, Sycamore did not “create a wormhole.” On the other hand, it remains pretty hard to simulate with a classical computer, as far as anyone knows. To summarize, then, our knowledge of what Sycamore can and can’t do remains much the same as last week or last month!


Let’s start with the wormhole thing. I can’t really improve over how I put it in Dennis Overbye’s NYT piece:

“The most important thing I’d want New York Times readers to understand is this,” Scott Aaronson, a quantum computing expert at the University of Texas in Austin, wrote in an email. “If this experiment has brought a wormhole into actual physical existence, then a strong case could be made that you, too, bring a wormhole into actual physical existence every time you sketch one with pen and paper.”

More broadly, Overbye’s NYT piece explains with admirable clarity what this experiment did and didn’t do—leaving only the question “wait … if that’s all that’s going on here, then why is it being written up in the NYT??” This is a rare case where, in my opinion, the NYT did a much better job than Quanta, which unequivocally accepted and amplified the “QC creates a wormhole” framing.

Alright, but what’s the actual basis for the “QC creates a wormhole” claim, for those who don’t want to leave this blog to read about it? Well, the authors used 9 of Sycamore’s 72 qubits to do a crude simulation of something called the SYK (Sachdev-Ye-Kitaev) model. SYK has become popular as a toy model for quantum gravity. In particular, it has a holographic dual description, which can indeed involve a spacetime with one or more wormholes. So, they ran a quantum circuit that crudely modelled the SYK dual of a scenario with information sent through a wormhole. They then confirmed that the circuit did what it was supposed to do—i.e., what they’d already classically calculated that it would do.

So, the objection is obvious: if someone simulates a black hole on their classical computer, they don’t say they thereby “created a black hole.” Or if they do, journalists don’t uncritically repeat the claim. Why should the standards be different just because we’re talking about a quantum computer rather than a classical one?

Did we at least learn anything new about SYK wormholes from the simulation? Alas, not really, because 9 qubits take a mere 2^9=512 complex numbers to specify their wavefunction, and are therefore trivial to simulate on a laptop. There’s some argument in the paper that, if the simulation were scaled up to (say) 100 qubits, then maybe we would learn something new about SYK. Even then, however, we’d mostly learn about certain corrections that arise because the simulation was being done with “only” n=100 qubits, rather than in the n→∞ limit where SYK is rigorously understood. But while those corrections, arising when n is “neither too large nor too small,” would surely be interesting to specialists, they’d have no obvious bearing on the prospects for creating real physical wormholes in our universe.
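
For concreteness, here’s a minimal NumPy sketch (a toy circuit of my own, not the actual SYK/wormhole circuit from the paper) showing just how small this regime is: a 9-qubit state is a vector of 2^9=512 complex amplitudes, and even applying a brute-force 512×512 matrix to it is instantaneous on a laptop.

    import numpy as np
    from functools import reduce

    n = 9
    dim = 2**n                                  # 512 complex amplitudes

    # Start in |0...0>
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0

    # One layer of random single-qubit unitaries (QR of a random complex matrix), one per qubit
    def random_unitary_2x2(rng):
        m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        q, r = np.linalg.qr(m)
        return q * (np.diag(r) / np.abs(np.diag(r)))   # fix phases so the result is unitary

    rng = np.random.default_rng(0)
    layer = reduce(np.kron, [random_unitary_2x2(rng) for _ in range(n)])  # full 512 x 512 unitary

    state = layer @ state
    print("dimension:", dim)
    print("norm after the layer (should be 1):", np.linalg.norm(state))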

And yet, this is not a sensationalistic misunderstanding invented by journalists. Some prominent quantum gravity theorists themselves—including some of my close friends and collaborators—persist in talking about the simulated SYK wormhole as “actually being” a wormhole. What are they thinking?

Daniel Harlow explained the thinking to me as follows (he stresses that he’s explaining it, not necessarily endorsing it). If you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, and if they were both simulating SYK, and if Alice on Earth and Bob in Andromeda both uploaded their own brains into their respective quantum simulations, then it seems possible that the simulated Alice and Bob could have the experience of jumping into a wormhole and meeting each other in the middle. Granted, they couldn’t get a message back out from the wormhole, at least not without “going the long way,” which could happen only at the speed of light—so only simulated-Alice and simulated-Bob themselves could ever test this prediction. Nevertheless, if true, I suppose some would treat it as grounds for regarding a quantum simulation of SYK as “more real” or “more wormholey” than a classical simulation.

Of course, this scenario depends on strong assumptions not merely about quantum gravity, but also about the metaphysics of consciousness! And I’d still prefer to call it a simulated wormhole for simulated people.

For completeness, here’s Harlow’s passage from the NYT article:

Daniel Harlow, a physicist at M.I.T. who was not involved in the experiment, noted that the experiment was based on a model of quantum gravity that was so simple, and unrealistic, that it could just as well have been studied using a pencil and paper.

“So I’d say that this doesn’t teach us anything about quantum gravity that we didn’t already know,” Dr. Harlow wrote in an email. “On the other hand, I think it is exciting as a technical achievement, because if we can’t even do this (and until now we couldn’t), then simulating more interesting quantum gravity theories would CERTAINLY be off the table.” Developing computers big enough to do so might take 10 or 15 years, he added.


Alright, let’s move on to the claim that quantum supremacy has been refuted. What Aharonov et al. actually show in their new work, building on earlier work by Gao and Duan, is that Random Circuit Sampling, with a constant rate of noise per gate and no error-correction, can’t provide a scalable approach to quantum supremacy. Or more precisely: as the number of qubits n goes to infinity, and assuming you’re in the “anti-concentration regime” (which in practice probably means: the depth of your quantum circuit is at least ~log(n)), there’s a classical algorithm to approximately sample the quantum circuit’s output distribution in poly(n) time (albeit, not yet a practical algorithm).

Here’s what’s crucial to understand: this is 100% consistent with what those of us working on quantum supremacy had assumed since at least 2016! We knew that if you tried to scale Random Circuit Sampling to 200 or 500 or 1000 qubits, while you also increased the circuit depth proportionately, the signal-to-noise ratio would become exponentially small, meaning that your quantum speedup would disappear. That’s why, from the very beginning, we targeted the “practical” regime of 50-100 qubits: a regime where

  1. you can still see explicitly that you’re exploiting a 2^50- or 2^100-dimensional Hilbert space for computational advantage, thereby confirming one of the main predictions of quantum computing theory, but
  2. you also have a signal that (as it turned out) is large enough to see with heroic effort.
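
Here’s a crude back-of-the-envelope sketch of the signal-to-noise point above. The error rate is a round number I’m assuming for illustration (not Google’s or USTC’s exact figure), and the model ignores single-qubit and readout errors; the qualitative point is just that, with a constant error per gate, the signal decays exponentially in the total gate count, i.e. roughly in n times the depth.

    # Crude fidelity model for noisy random circuit sampling (assumed numbers, for illustration):
    # with error eps per two-qubit gate and about (n/2)*depth such gates,
    # the observable signal scales like (1 - eps)^(n*depth/2).
    def rough_signal(n_qubits, depth, eps=0.01):
        num_gates = (n_qubits // 2) * depth
        return (1 - eps) ** num_gates

    for n, d in [(53, 20), (100, 30), (500, 60), (1000, 100)]:
        print(f"n={n:5d}, depth={d:3d}: signal ~ {rough_signal(n, d):.2e}")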

To their credit, Aharonov et al. explain all this perfectly clearly in their abstract and introduction. I’m just worried that others aren’t reading their paper as carefully as they should be!

So then, what’s the new advance in the Aharonov et al. paper? Well, there had been some hope that circuit depth ~log(n) might be a sweet spot, where an exponential quantum speedup might both exist and survive constant noise, even in the asymptotic limit of n→∞ qubits. Nothing in Google’s or USTC’s actual Random Circuit Sampling experiments depended on that hope, but it would’ve been nice if it were true. What Aharonov et al. have now done is to kill that hope, using powerful techniques involving summing over Feynman paths in the Pauli basis.

Stepping back, what is the current status of quantum supremacy based on Random Circuit Sampling? I would say it’s still standing, but more precariously than I’d like—underscoring the need for new and better quantum supremacy experiments. In more detail, Pan, Chen, and Zhang have shown how to simulate Google’s 53-qubit Sycamore chip classically, using what I estimated to be 100-1000X the electricity cost of running the quantum computer itself (including the dilution refrigerator!). Approaching the problem from a different angle, Gao et al. have given a polynomial-time classical algorithm for spoofing Google’s Linear Cross-Entropy Benchmark (LXEB)—but their algorithm can currently achieve only about 10% of the excess in LXEB that Google’s experiment found.
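
For readers who haven’t seen it spelled out, here’s a minimal sketch of what the linear cross-entropy benchmark computes. The “ideal distribution” below is a toy Porter-Thomas-like stand-in that I generate myself, not any real Sycamore data: you take the sampled bitstrings, look up their probabilities under the ideal circuit distribution, and check whether the average beats the 1/2^n you’d get from uniform guessing.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 12
    dim = 2**n

    # Stand-in for an ideal random-circuit output distribution (Porter-Thomas-like):
    # exponentially distributed weights, normalized. (Toy data, not Sycamore's.)
    p_ideal = rng.exponential(size=dim)
    p_ideal /= p_ideal.sum()

    def lxeb(samples, p_ideal):
        """Linear cross-entropy benchmark: 2^n times the mean ideal probability of the samples, minus 1."""
        return len(p_ideal) * p_ideal[samples].mean() - 1

    good_samples = rng.choice(dim, size=50_000, p=p_ideal)   # "noiseless device"
    spoof_samples = rng.integers(0, dim, size=50_000)        # uniform guessing

    print("LXEB, sampling from the ideal distribution:", round(lxeb(good_samples, p_ideal), 3))   # ~1
    print("LXEB, uniform random bitstrings:           ", round(lxeb(spoof_samples, p_ideal), 3))  # ~0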

So, though it’s been under sustained attack from multiple directions these past few years, I’d say that the flag of quantum supremacy yet waves. The Extended Church-Turing Thesis is still on thin ice. The wormhole is still open. Wait … no … that’s not what I meant to write…


Note: With this post, as with future science posts, all off-topic comments will be ruthlessly left in moderation. Yes, even if the comments “create their own reality” full of anger and disappointment that I talked about what I talked about, instead of what the commenter wanted me to talk about. Even if merely refuting the comments would require me to give in and talk about their preferred topics after all. Please stop. This is a wormholes-‘n-supremacy post.

Dismantling Quantum Hype with Tim Nguyen

Tuesday, November 22nd, 2022

Happy Thanksgiving to my American readers! While I enjoy a family holiday-week vacation in exotic Dallas—and yes, I will follow up on my old JFK post by visiting Dealey Plaza—please enjoy the following Thanksgiving victuals:

I recently recorded a 3-hour (!) YouTube video with Timothy Nguyen, host of the Cartesian Cafe. Our episode is entitled Quantum Computing: Dismantling the Hype. In it, I teach a sort of extremely compressed version of my undergraduate Intro to Quantum Information Science course, unburdening myself about whatever Tim prompts me to explain: the basic rules of quantum information, quantum circuits, the quantum black-box model, the Deutsch-Jozsa algorithm, BQP and its relationship to classical complexity classes, and sampling-based quantum supremacy experiments. This is a lot more technical than an average podcast, a lot less technical than an actual course, and hopefully just right for some nonempty subset of readers.

Outside of his podcasting career, some of you might recognize Nguyen as the coauthor, with Theo Polya, of a rebuttal of “Geometric Unity.” This latter is the proposal by the financier, podcaster, and leading “Intellectual Dark Web” figure Eric Weinstein for a unified theory of particle physics. Now, I slightly know Weinstein, and have even found him fascinating, eloquent, and correct about various issues. So, in an addendum to the main video, Nguyen chats with me about his experience critiquing Weinstein’s theory, and also about something where my knowledge is far greater: namely, my 2002 rebuttal of some of the central claims in Stephen Wolfram’s A New Kind of Science, and whether there are any updates to that story twenty years later.

Enjoy!

Oh right, quantum computing

Monday, October 31st, 2022

These days, I often need to remind myself that, as an undergrad, grad student, postdoc, or professor, I’ve now been doing quantum computing research for a quarter-century—i.e., well over half of the subject’s existence. As a direct result, when I feel completely jaded about a new development in QC, it might actually be exciting. When I feel moderately excited, it might actually be the most exciting thing for years.

With that in mind:


(1) Last week National Public Radio’s Marketplace interviewed me, John Martinis, and others about the current state of quantum computing. While the piece wasn’t entirely hype-free, I’m pleased to report that my own views were represented accurately! To wit:

“There is a tsunami of hype about what quantum computers are going to revolutionize,” said Scott Aaronson, a professor of computer science at the University of Texas at Austin. “Quantum computing has turned into a word that venture capitalists or people seeking government funding will sprinkle on anything because it sounds good.”

Aaronson warned we can’t be certain that these computers will in fact revolutionize machine learning and finance and optimization problems.  “We can’t prove that there’s not a quantum algorithm that solves all these problems super fast, but we can’t even prove there’s not an algorithm for a conventional computer that does it,” he said. [In the recorded version, they replaced this by a simpler but also accurate thought: namely, that we can’t prove one way or the other whether there’s a useful quantum advantage for these tasks.]


(2) I don’t like to use this blog to toot my own research horn, but on Thursday my postdoc Jason Pollack and I released a paper, entitled Discrete Bulk Reconstruction. And to be honest, I’m pretty damned excited about it. It represents about 8 months of Jason—a cosmologist and string theorist who studied under Sean Carroll—helping me understand AdS/CFT in the language of the undergraduate CS curriculum, like min-cuts on undirected graphs, so that we could then look for polynomial-time algorithms to implement the holographic mapping from boundary quantum states to the spatial geometry in the bulk. We drew heavily on previous work in the same direction, especially the already-seminal 2015 holographic entropy cone paper by Ning Bao et al. But I’d like to think that, among other things, our work represents a new frontier in just how accessible AdS/CFT itself can be made to CS and discrete math types. Anyway, here’s the abstract if you’re interested:

According to the AdS/CFT correspondence, the geometries of certain spacetimes are fully determined by quantum states that live on their boundaries — indeed, by the von Neumann entropies of portions of those boundary states. This work investigates to what extent the geometries can be reconstructed from the entropies in polynomial time. Bouland, Fefferman, and Vazirani (2019) argued that the AdS/CFT map can be exponentially complex if one wants to reconstruct regions such as the interiors of black holes. Our main result provides a sort of converse: we show that, in the special case of a single 1D boundary, if the input data consists of a list of entropies of contiguous boundary regions, and if the entropies satisfy a single inequality called Strong Subadditivity, then we can construct a graph model for the bulk in linear time. Moreover, the bulk graph is planar, it has O(N^2) vertices (the information-theoretic minimum), and it’s “universal,” with only the edge weights depending on the specific entropies in question. From a combinatorial perspective, our problem boils down to an “inverse” of the famous min-cut problem: rather than being given a graph and asked to find a min-cut, here we’re given the values of min-cuts separating various sets of vertices, and need to find a weighted undirected graph consistent with those values. Our solution to this problem relies on the notion of a “bulkless” graph, which might be of independent interest for AdS/CFT. We also make initial progress on the case of multiple 1D boundaries — where the boundaries could be connected via wormholes — including an upper bound of O(N^4) vertices whenever a planar bulk graph exists (thus putting the problem into the complexity class NP).
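
To make the combinatorial setup concrete for CS readers, here’s a minimal sketch of the “forward” direction, ordinary min-cut on a tiny toy graph of my own (brute force over vertex subsets, since the example is small); the paper’s problem is the inverse map, from a table of such cut values back to a consistent weighted graph.

    from itertools import combinations

    # A toy weighted undirected graph: edge -> weight
    edges = {
        ("a", "b"): 2.0, ("b", "c"): 1.0,
        ("c", "d"): 3.0, ("a", "d"): 1.5, ("b", "d"): 0.5,
    }
    vertices = {"a", "b", "c", "d"}

    def min_cut_value(sources, sinks):
        """Minimum total weight of edges separating `sources` from `sinks` (brute force)."""
        others = vertices - set(sources) - set(sinks)
        best = float("inf")
        # Try every way of assigning the remaining vertices to the source side
        for k in range(len(others) + 1):
            for extra in combinations(others, k):
                side = set(sources) | set(extra)
                cut = sum(w for (u, v), w in edges.items() if (u in side) != (v in side))
                best = min(best, cut)
        return best

    # The entropies in the bulk-reconstruction problem play the role of these cut values;
    # the paper's task is to recover a graph like `edges` given only numbers like these.
    print(min_cut_value({"a"}, {"c"}))
    print(min_cut_value({"a", "b"}, {"c", "d"}))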


(3) Anand Natarajan and Chinmay Nirkhe posted a preprint entitled A classical oracle separation between QMA and QCMA, which makes progress on a problem that’s been raised on this blog all the way back to its inception. A bit of context: QMA, Quantum Merlin-Arthur, captures what can be proven using a quantum state with poly(n) qubits as the proof, and a polynomial-time quantum algorithm as the verifier. QCMA, or Quantum Classical Merlin-Arthur, is the same as QMA except that now the proof has to be classical. A fundamental problem of quantum complexity theory, first raised by Aharonov and Naveh in 2002, is whether QMA=QCMA. In 2007, Greg Kuperberg and I introduced the concept of quantum oracle separation—that is, a unitary that can be applied in a black-box manner—in order to show that there’s a quantum oracle relative to which QCMA≠QMA. In 2015, Fefferman and Kimmel improved this, to show that there’s a “randomized in-place” oracle relative to which QCMA≠QMA. Natarajan and Nirkhe now remove the “in-place” part, meaning the only thing still “wrong” with their oracle is that it’s randomized. Derandomizing their construction would finally settle this 20-year-old open problem (except, of course, for the minor detail of whether QMA=QCMA in the “real,” unrelativized world!).


(4) Oh right, the Google group reports the use of their superconducting processor to simulate non-abelian anyons. Cool.

Postdocs, matrix multiplication, and WSJ: yet more shorties

Friday, October 7th, 2022

I’m proud to say that Nick Hunter-Jones and Matteo Ippoliti—both of whom work at the interface between quantum information science and condensed-matter physics (Nick closer to the former and Matteo to the latter)—have joined the physics faculty at UT Austin this year. And Nick, Matteo, and I are jointly seeking postdocs to start in Fall 2023! Please check out our call for applications here. The deadline is December 1; you apply through AcademicJobsOnline rather than by emailing me as in past years.


The big news in AI and complexity theory this week was DeepMind’s AlphaTensor, and its automated discovery of new algorithms for matrix multiplication. (See here for the Nature paper.) More concretely, they’ve used AI to discover (among other things) an algorithm for multiplying 4×4 matrices, over finite fields of characteristic 2, using only 47 scalar multiplications. This beats the 49=7×7 that you’d get from Strassen’s algorithm. There are other improvements for other matrix dimensions, many of which work over fields of other characteristics.

Since I’ve seen confusion about the point on social media: this does not improve over the best known asymptotic exponent for matrix multiplication, which, over any field, still stands at the human-discovered 2.373 (meaning, we know how to multiply two N×N matrices in O(N^2.373) time, but not faster). But it does asymptotically improve over Strassen’s O(N^2.81) algorithm from 1968, conceivably even in a way that could have practical relevance for multiplying hundreds-by-hundreds or thousands-by-thousands matrices over F_2.
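
For anyone who wants to see where the 7 (and hence Strassen’s exponent log_2(7) ≈ 2.81) comes from, here’s the classic 2×2 scheme as a minimal NumPy sketch; the blocks A11, B11, etc. can themselves be matrices, which is what makes the trick recursive. AlphaTensor’s 4×4 scheme with 47 multiplications plays the same game one level up, giving exponent log_4(47) ≈ 2.78 when applied recursively over characteristic-2 fields, which is the asymptotic improvement over Strassen mentioned above.

    import numpy as np

    def strassen_2x2_block(A11, A12, A21, A22, B11, B12, B21, B22):
        """One level of Strassen: 7 block multiplications instead of the naive 8."""
        M1 = (A11 + A22) @ (B11 + B22)
        M2 = (A21 + A22) @ B11
        M3 = A11 @ (B12 - B22)
        M4 = A22 @ (B21 - B11)
        M5 = (A11 + A12) @ B22
        M6 = (A21 - A11) @ (B11 + B12)
        M7 = (A12 - A22) @ (B21 + B22)
        C11 = M1 + M4 - M5 + M7
        C12 = M3 + M5
        C21 = M2 + M4
        C22 = M1 - M2 + M3 + M6
        return np.block([[C11, C12], [C21, C22]])

    rng = np.random.default_rng(0)
    A, B = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
    blocks = lambda M: (M[:4, :4], M[:4, 4:], M[4:, :4], M[4:, 4:])
    C = strassen_2x2_block(*blocks(A), *blocks(B))
    print("max error vs. ordinary multiplication:", np.abs(C - A @ B).max())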

Way back in 2007, I gave a talk at MIT CSAIL’s “Wild and Crazy Ideas Session,” where I explicitly proposed to use computer search to look for faster algorithms for 4×4 and 5×5 matrix multiplication. The response I got at the time was that it was hopeless, since the search space was already too huge. Of course, that was before the deep learning revolution.


This morning, the Wall Street Journal published an article by Karen Hao about competition between China and the US in quantum computing. Unfortunately paywalled, but includes the following passage:

Meanwhile, American academics say it’s gotten harder for Chinese students to obtain visas to conduct quantum research in the U.S. “It’s become common knowledge that when Chinese students or postdocs come to the U.S., they can’t say they’re doing quantum computing,” says Scott Aaronson, director of the Quantum Information Center at the University of Texas, Austin.

My Quantum Information Science II Lecture Notes: The wait is over!

Wednesday, August 31st, 2022

Here they are [PDF].

They’re 155 pages of awesome—for a certain extremely specific definition of “awesome”—which I’m hereby offering to the world free of charge (for noncommercial use only, of course). They cover material that I taught, for the first time, in my Introduction to Quantum Information Science II undergrad course at UT Austin in Spring 2022.

The new notes pick up exactly where my older QIS I lecture notes left off, and they presuppose familiarity with the QIS I material. So, if you’re just beginning your quantum information journey, then please start with my QIS I notes, which presuppose only linear algebra and a bit of classical algorithms (e.g., recurrence relations and big-O notation), and which self-containedly explain all the rules of QM, moving on to (e.g.) quantum circuits, density matrices, entanglement entropy, Wiesner’s quantum money, QKD, quantum teleportation, the Bell inequality, interpretations of QM, the Shor 9-qubit code, and the algorithms of Deutsch-Jozsa, Bernstein-Vazirani, Simon, Shor, and Grover. Master all that, and you’ll be close to the quantum information research frontier of circa 1996.

My new QIS II notes cover a bunch of topics, but the main theme is “perspectives on quantum computing that go beyond the bare quantum circuit model, and that became increasingly central to the field from the late 1990s onwards.” Thus, it covers:

  • Hamiltonians, ground states, the adiabatic algorithm, and the universality of adiabatic QC
  • The stabilizer formalism, the 1996 Gottesman-Knill Theorem on efficient classical simulation of stabilizer QC, my and Gottesman’s 2004 elaborations, boosting up to universality via “magic states,” transversal codes, and the influential 2016 concept of stabilizer rank
  • Bosons and fermions: the formalism of Fock space and of creation and annihilation operators, connection to the permanents and determinants of matrices, efficient classical simulation of free fermionic systems (Valiant’s 2002 “matchcircuits”), the 2001 Knill-Laflamme-Milburn (KLM) theorem on universal optical QC, BosonSampling and its computational complexity, and the current experimental status of BosonSampling
  • Cluster states, Raussendorf and Briegel’s 2000 measurement-based quantum computation (MBQC), and Gottesman and Chuang’s 1999 “gate teleportation” trick
  • Matrix product states, and Vidal’s 2003 efficient classical simulation of “slightly entangled” quantum computations

Extra bonus topics include:

  • The 2007 Broadbent-Fitzsimons-Kashefi (BFK) protocol for blind and authenticated QC; brief discussion of later developments including Reichardt-Unger-Vazirani 2012 and Mahadev 2018
  • Basic protocols for quantum state tomography
  • My 2007 work on PAC-learnability of quantum states
  • The “dessert course”: the black hole information problem, and the Harlow-Hayden argument on the computational hardness of decoding Hawking radiation

Master all this, and hopefully you’ll have the conceptual vocabulary to understand a large fraction of what people in quantum computing and information care about today, how they now think about building scalable QCs, and what they post to the quant-ph arXiv.

Note that my QIS II course is complementary to my graduate course on quantum complexity theory, for which the lecture notes are here. There’s very little overlap between the two (and even less overlap between QIS II and Quantum Computing Since Democritus).

The biggest, most important topic related to the QIS II theme that I didn’t cover was topological quantum computing. I’d wanted to, but it quickly became clear that topological QC begs for a whole course of its own, and that I had neither the time nor the expertise to do it justice. In retrospect, I do wish I’d at least covered the Kitaev surface code.

Crucially, these lecture notes don’t represent my effort alone. I worked from draft scribe notes prepared by the QIS II students, who did a far better job than I had any right to expect (including creating the beautiful figures). My wonderful course TA and PhD student Daniel Liang, along with students Ethan Tan, Samuel Ziegelbein, and Steven Han, then assembled everything, fixed numerous errors, and compiled the bibliography. I’m grateful to all of them. At the last minute, we had a LaTeX issue that none of us knew how to fix—but, in response to a plea, Shtetl-Optimized reader Pablo Cingolani generously volunteered to help, completed the work by the very next day (I’d imagined it taking a month!), and earned a fruit basket from me in gratitude.

Anyway, let me know of any mistakes you find! We’ll try to fix them.

Summer 2022 Quantum Supremacy Updates

Saturday, August 13th, 2022

Update: We’re now finalizing the lecture notes—basically, a textbook—for the brand-new Quantum Information Science II course that I taught this past spring! The notes will be freely shared on this blog. But the bibliographies for the various lectures need to be merged, and we don’t know how. Would any TeXpert like to help us, in exchange for a generous acknowledgment? A reader named Pablo Cingolani has now graciously taken care of this. Thanks so much, Pablo!


I returned last week from the NSF Workshop on Quantum Advantage and Next Steps at the University of Chicago. Thanks so much to Chicago CS professor (and my former summer student) Bill Fefferman and the other organizers for making this workshop a reality. Given how vividly I remember the situation 12 years ago, when the idea of sampling-based quantum supremacy was the weird obsession of me and a few others, it was particularly special to attend a workshop on the topic with well over a hundred participants, some in-person and some on Zoom, delayed by covid but still excited by the dramatic experimental progress of the past few years.

Of course there’s a lot still to do. Many of the talks put an exclamation point on something I’ve been saying for the past couple years: that there’s an urgent need for better quantum supremacy experiments, which will require both theoretical and engineering advances. The experiments by Google and USTC and now Xanadu represent a big step forward for the field, but since they started being done, the classical spoofing attacks have also steadily improved, to the point that whether “quantum computational supremacy” still exists depends on exactly how you define it.

Briefly: if you measure by total operations, energy use, or CO2 footprint, then probably yes, quantum supremacy remains. But if you measure by number of seconds, then it doesn’t remain, not if you’re willing to shell out for enough cores on AWS or your favorite supercomputer. And even the quantum supremacy that does remain might eventually fall to, e.g., further improvements of the algorithm due to Gao et al. For more details, see, e.g., the now-published work of Pan, Chen, and Zhang, or this good popular summary by Adrian Cho for Science.

If the experimentalists care enough, they could easily regain the quantum lead, at least for a couple more years, by (say) repeating random circuit sampling with 72 qubits rather than 53-60, and hopefully circuit depth of 30-40 rather than just 20-25. But the point I made in my talk was that, as long as we remain in the paradigm of sampling experiments that take ~2^n time to verify classically and also ~2^n time to spoof classically (where n is the number of qubits), all quantum speedups that we can achieve will be fragile and contingent, however interesting and impressive. As soon as we go way beyond what classical computers can keep up with, we’ve also gone way beyond where classical computers can check what was done.

I argued that the only solution to this problem is to design new quantum supremacy experiments: ones where some secret has been inserted in the quantum circuit that lets a classical computer efficiently verify the results. The fundamental problem is that we need three properties—

  1. implementability on near-term quantum computers,
  2. efficient classical verifiability, and
  3. confidence that quantum computers have a theoretical asymptotic advantage,

and right now we only know how to get any two out of the three. We can get 1 and 2 with QAOA and various other heuristic quantum algorithms, 1 and 3 with BosonSampling and Random Circuit Sampling, or 2 and 3 with Shor’s algorithm or Yamakawa-Zhandry or recent interactive protocols. To get all three, there are three obvious approaches:

  1. Start with the heuristic algorithms and find a real advantage from them,
  2. Start with BosonSampling or Random Circuit Sampling or the like and figure out how to make them efficiently verifiable classically, or
  3. Start with protocols like Kahanamoku-Meyer et al.’s and figure out how to run them on near-term devices.

At the Chicago workshop, I’d say that the most popular approach speakers talked about was 3, although my own focus was more on 2.

Anyway, until a “best-of-all-worlds” quantum supremacy experiment is discovered, there’s plenty to do in the meantime: for example, better understand the classical hardness of spoofing Random Circuit Sampling with a constant or logarithmic number of gate layers. Understand the best that classical algorithms can do to spoof the Linear Cross-Entropy Benchmark for BosonSampling, and build on Grier et al. to understand the power of BosonSampling with a linear number of modes.

I’ll be flying back with my family to Austin today after seven weeks at the Jersey shore, but I’ll try to field any questions about the state of quantum supremacy in general or this workshop in particular in the comments!

Updatez

Tuesday, August 2nd, 2022
  1. On the IBM Qiskit blog, there’s an interview with me about the role of complexity theory in the early history of quantum computing. Not much new for regular readers, but I’m very happy with how it came out—thanks so much to Robert Davis and Olivia Lanes for making it happen! My only quibble is with the sketch of my face, which might create the inaccurate impression that I no longer have teeth.
  2. Boaz Barak pointed me to a Twitter thread of DALL-E paintings of people using quantum computers, in the styles of many of history’s famous artists. While the motifs are unsurprising (QCs look like regular computers but glowing, or maybe like giant glowing atoms), highly recommended as another demonstration of the sort of thing DALL-E does best.
  3. Dan Spielman asked me to announce that the National Academy of Sciences is seeking nominations for the Held Prize in combinatorial and discrete optimization. The deadline is October 3.
  4. I’m at the NSF Workshop on Quantum Advantage and Next Steps at the University of Chicago. My talk yesterday was entitled “Verifiable Quantum Advantage: What I Hope Will Be Done” (yeah yeah, I decided to call it “advantage” rather than “supremacy” in deference to the name of the workshop). My PowerPoint slides are here. Meanwhile, this morning was the BosonSampling session. The talk by Chaoyang Lu, leader of USTC’s experimental BosonSampling effort, was punctuated by numerous silly memes and videos, as well as the following striking sentence: “only by putting the seven dragon balls together can you unlock the true quantum computational power.”
  5. Gavin Leech lists and excerpts his favorite writings of mine over the past 25 years, while complaining that I spend “a lot of time rebutting fleeting manias” and “obsess[ing] over political flotsam.”

On black holes, holography, the Quantum Extended Church-Turing Thesis, fully homomorphic encryption, and brain uploading

Wednesday, July 27th, 2022

I promise you: this post is going to tell a scientifically coherent story that involves all five topics listed in the title. Not one can be omitted.

My story starts with a Zoom talk that the one and only Lenny Susskind delivered for the Simons Institute for Theory of Computing back in May. There followed a panel discussion involving Lenny, Edward Witten, Geoffrey Penington, Umesh Vazirani, and your humble shtetlmaster.

Lenny’s talk led up to a gedankenexperiment involving an observer, Alice, who bravely jumps into a specially-prepared black hole, in order to see the answer to a certain computational problem in her final seconds before being ripped to shreds near the singularity. Drawing on earlier work by Bouland, Fefferman, and Vazirani, Lenny speculated that the computational problem could be exponentially hard even for a (standard) quantum computer. Despite this, Lenny repeatedly insisted—indeed, he asked me again to stress here—that he was not claiming to violate the Quantum Extended Church-Turing Thesis (QECTT), the statement that all of nature can be efficiently simulated by a standard quantum computer. Instead, he was simply investigating how the QECTT needs to be formulated in order to be a true statement.

I didn’t understand this, to put it mildly. If what Lenny was saying was right—i.e., if the infalling observer could see the answer to a computational problem not in BQP, or Bounded-Error Quantum Polynomial-Time—then why shouldn’t we call that a violation of the QECTT? Just like we call Shor’s quantum factoring algorithm a likely violation of the classical Extended Church-Turing Thesis, the thesis saying that nature can be efficiently simulated by a classical computer? Granted, you don’t have to die in order to run Shor’s algorithm, as you do to run Lenny’s experiment. But why should such implementation details matter from the lofty heights of computational complexity?

Alas, not only did Lenny never answer that in a way that made sense to me, he kept trying to shift the focus from real, physical black holes to “silicon spheres” made of qubits, which would be programmed to simulate the process of Alice jumping into the black hole (in a dual boundary description). Say what? Granting that Lenny’s silicon spheres, being quantum computers under another name, could clearly be simulated in BQP, wouldn’t this still leave the question about the computational powers of observers who jump into actual black holes—i.e., the question that we presumably cared about in the first place?

Confusing me even further, Witten seemed almost dismissive of the idea that Lenny’s gedankenexperiment raised any new issue for the QECTT—that is, any issue that wouldn’t already have been present in a universe without gravity. But as to Witten’s reasons, the most I understood from his remarks was that he was worried about various “engineering” issues with implementing Lenny’s gedankenexperiment, involving gravitational backreaction and the like. Ed Witten, now suddenly the practical guy! I couldn’t even isolate the crux of disagreement between Susskind and Witten, since after all, they agreed (bizarrely, from my perspective) that the QECTT wasn’t violated. Why wasn’t it?

Anyway, shortly afterward I attended the 28th Solvay Conference in Brussels, where one of the central benefits I got—besides seeing friends after a long COVID absence and eating some amazing chocolate mousse—was a dramatically clearer understanding of the issues in Lenny’s gedankenexperiment. I owe this improved understanding to conversations with many people at Solvay, but above all Daniel Gottesman and Daniel Harlow. Lenny himself wasn’t there, other than in spirit, but I ran the Daniels’ picture by him afterwards and he assented to all of its essentials.

The Daniels’ picture is what I want to explain in this post. Needless to say, I take sole responsibility for any errors in my presentation, as I also take sole responsibility for not understanding (or rather: not doing the work to translate into terms that I understood) what Susskind and Witten had said to me before.


The first thing you need to understand about Lenny’s gedankenexperiment is that it takes place entirely in the context of AdS/CFT: the famous holographic duality between two types of physical theories that look wildly different. Here AdS stands for anti-de-Sitter: a quantum theory of gravity describing a D-dimensional universe with a negative cosmological constant (i.e. hyperbolic geometry), one where black holes can form and evaporate and so forth. Meanwhile, CFT stands for conformal field theory: a quantum field theory, with no apparent gravity (and hence no black holes), that lives on the (D-1)-dimensional boundary of the D-dimensional AdS space. The staggering claim of AdS/CFT is that every physical question about the AdS bulk can be translated into an equivalent question about the CFT boundary, and vice versa, with a one-to-one mapping from states to states and observables to observables. So in that sense, they’re actually the same theory, just viewed in two radically different ways. AdS/CFT originally came out of string theory, but then notoriously “swallowed its parent,” to the point where nowadays, if you go to what are still called “string theory” meetings, you’re liable to hear vastly more discussion of AdS/CFT than of actual strings.

Thankfully, the story I want to tell won’t depend on fine details of how AdS/CFT works. Nevertheless, you can’t just ignore the AdS/CFT part as some technicality, in order to get on with the vivid tale of Alice jumping into a black hole, hoping to learn the answer to a beyond-BQP computational problem in her final seconds of existence. The reason you can’t ignore it is that the whole beyond-BQP computational problem we’ll be talking about, involves the translation (or “dictionary”) between the AdS bulk and the CFT boundary. If you like, then, it’s actually the chasm between bulk and boundary that plays the starring role in this story. The more familiar chasm within the bulk, between the interior of a black hole and its exterior (the two separated by an event horizon), plays only a subsidiary role: that of causing the AdS/CFT dictionary to become exponentially complex, as far as anyone can tell.

Pause for a minute. Previously I led you to believe that we’d be talking about an actual observer Alice, jumping into an actual physical black hole, and whether Alice could see the answer to a problem that’s intractable even for quantum computers in her last moments before hitting the singularity, and if so whether we should take that to refute the Quantum Extended Church-Turing Thesis. What I’m saying now is so wildly at variance with that picture, that it had to be repeated to me about 10 times before I understood it. Once I did understand, I then had to repeat it to others about 10 times before they understood. And I don’t care if people ridicule me for that admission—how slow Scott and his friends must be, compared to string theorists!—because my only goal right now is to get you to understand it.

To say it again: Lenny has not proposed a way for Alice to surpass the complexity-theoretic power of quantum computers, even for a brief moment, by crossing the event horizon of a black hole. If that was Alice’s goal when she jumped into the black hole, then alas, she probably sacrificed her life for nothing! As far as anyone knows, Alice’s experiences, even after crossing the event horizon, ought to continue to be described extremely well by general relativity and quantum field theory (at least until she nears the singularity and dies), and therefore ought to be simulatable in BQP. Granted, we don’t actually know this—you can call it an open problem if you like—but it seems like a reasonable guess.

In that case, though, what beyond-BQP problem was Lenny talking about, and what does it have to do with black holes? Building on the Bouland-Fefferman-Vazirani paper, Lenny was interested in a class of problems of the following form: Alice is given as input a pure quantum state |ψ⟩, which encodes a boundary CFT state, which is dual to an AdS bulk universe that contains a black hole. Alice’s goal is, by examining |ψ⟩, to learn something about what’s inside the black hole. For example: does the black hole interior contain “shockwaves,” and if so how many and what kind? Does it contain a wormhole, connecting it to a different black hole in another universe? If so, what’s the volume of that wormhole? (Not the first question I would ask either, but bear with me.)

Now, when I say Alice is “given” the state |ψ⟩, this could mean several things: she could just be physically given a collection of n qubits. Or, she could be given a gigantic table of 2n amplitudes. Or, as a third possibility, she could be given a description of a quantum circuit that prepares |ψ⟩, say from the all-0 initial state |0n⟩. Each of these possibilities leads to a different complexity-theoretic picture, and the differences are extremely interesting to me, so that’s what I mostly focused on in my remarks in the panel discussion after Lenny’s talk. But it won’t matter much for the story I want to tell in this post.

However |ψ⟩ is given to Alice, the prediction of AdS/CFT is that |ψ⟩ encodes everything there is to know about the AdS bulk, including whatever is inside the black hole—but, and this is crucial, the information about what’s inside the black hole will be pseudorandomly scrambled. In other words, it works like this: whatever simple thing you’d like to know about parts of the bulk that aren’t hidden behind event horizons—is there a star over here? some gravitational lensing over there? etc.—it seems that you could not only learn it by measuring |ψ⟩, but learn it in polynomial time, the dictionary between bulk and boundary being computationally efficient in that case. (As with almost everything else in this subject, even that hasn’t been rigorously proven, though my postdoc Jason Pollack and I made some progress this past spring by proving a piece of it.) On the other hand, as soon as you want to know what’s inside an event horizon, the fact that there are no probes that an “observer at infinity” could apply to find out, seems to translate into the requisite measurements on |ψ⟩ being exponentially complex to apply. (Technically, you’d have to measure an ensemble of poly(n) identical copies of |ψ⟩, but I’ll ignore that in what follows.)

In more detail, the relevant part of |ψ⟩ turns into a pseudorandom, scrambled mess: a mess that it’s plausible that no polynomial-size quantum circuit could even distinguish from the maximally mixed state. So, while in principle the information is all there in |ψ⟩, getting it out seems as hard as various well-known problems in symmetric-key cryptography, if not literally NP-hard. This is way beyond what we expect even a quantum computer to be able to do efficiently: indeed, after 30 years of quantum algorithms research, the best quantum speedup we know for this sort of task is typically just the quadratic speedup from Grover’s algorithm.

So now you understand why there was some hope that Alice, by jumping into a black hole, could solve a problem that’s exponentially hard for quantum computers! Namely because, once she’s inside the black hole, she can just see the shockwaves, or the volume of the wormhole, or whatever, and no longer faces the exponentially hard task of decoding that information from |ψ⟩. It’s as if the black hole has solved the problem for her, by physically instantiating the otherwise exponentially complex transformation between the bulk and boundary descriptions of |ψ⟩.

Having now gotten your hopes up, the next step in the story is to destroy them.


Here’s the fundamental problem: |ψ⟩ does not represent the CFT dual of a bulk universe that contains the black hole with the shockwaves or whatever, and that also contains Alice herself, floating outside the black hole, and being given |ψ⟩ as an input.  Indeed, it’s unclear what the latter state would even mean: how do we get around the circularity in its definition? How do we avoid an infinite regress, where |ψ⟩ would have to encode a copy of |ψ⟩ which would have to encode a copy of … and so on forever? Furthermore, who created this |ψ⟩ to give to Alice? We don’t normally imagine that an “input state” contains a complete description of the body and brain of the person whose job it is to learn the output.

By contrast, a scenario that we can define without circularity is this: Alice is given (via physical qubits, a giant table of amplitudes, an obfuscated quantum circuit, or whatever) a pure quantum state |ψ⟩, which represents the CFT dual of a hypothetical universe containing a black hole.  Alice wants to learn what shockwaves or wormholes are inside the black hole, a problem plausibly conjectured not to have any ordinary polynomial-size quantum circuit that takes copies of |ψ⟩ as input.  To “solve” the problem, Alice sets into motion the following sequence of events:

  1. Alice scans and uploads her own brain into a quantum computer, presumably destroying the original meat brain in the process! The QC represents Alice, who now exists only virtually, via a state |φ⟩.
  2. The QC performs entangling operations on |φ⟩ and |ψ⟩, which correspond to inserting Alice into the bulk of the universe described by |ψ⟩, and then having her fall into the black hole.
  3. Now in simulated form, “Alice” (or so we assume, depending on our philosophical position) has the subjective experience of falling into the black hole and observing what’s inside.  Success! Given |ψ⟩ as input, we’ve now caused “Alice” (for some definition of “Alice”) to have observed the answer to the beyond-BQP computational problem.

In the panel discussion, I now model Susskind as having proposed scenario 1-3, Witten as going along with 1-2 but rejecting 3 or not wanting to discuss it, and me as having made valid points about the computational complexity of simulating Alice's experience in 1-3, while nevertheless being radically mistaken about what the scenario was (I still thought an actual black hole was involved).

An obvious question is whether, having learned the answer, “Alice” can now get the answer back out to the “real, original” world. Alas, the expectation is that this would require exponential time. Why? Because otherwise, this whole process would’ve constituted a subexponential-time algorithm for distinguishing random from pseudorandom states using an “ordinary” quantum computer! Which is conjectured not to exist.
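
To spell out the counting (schematically; the step names here are mine): suppose uploading Alice, inserting her into the bulk, and simulating her experience all take poly(n) time, while extracting the answer back to the boundary takes some additional time T(n). Then the composed circuit

$$D \;=\; \mathrm{Extract}\,\circ\,\mathrm{Simulate}\,\circ\,\mathrm{Insert}\,\circ\,\mathrm{Upload}$$

has size poly(n) + T(n) and, given copies of |ψ⟩, reads off interior data, thereby distinguishing |ψ⟩ from a Haar-random state in the sense sketched earlier. Under the pseudorandomness conjecture, that forces T(n) to be (essentially) exponential.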

And what about Alice herself? In polynomial time, could she return from “the Matrix,” back to a real-world biological body? Sure she could, in principle—if, for example, the entire quantum computation were run in reverse. But notice that reversing the computation would also make Alice forget the answer to the problem! Which is not at all a coincidence: if the problem is outside BQP, then in general, Alice can know the answer only while she’s “inside the Matrix.”

Now that hopefully everything is crystal-clear and we’re all on the same page, what can we say about this scenario?  In particular: should it cause us to reject or modify the QECTT itself?


Daniel Gottesman, I thought, offered a brilliant reductio ad absurdum of the view that the simulated black hole scenario should count as a refutation of the QECTT. Well, he didn’t call it a “reductio,” but I will.

For the reductio, let's forget not only about quantum gravity but even about quantum mechanics itself, and go all the way back to classical computer science.  A fully homomorphic encryption scheme, the first example of which was discovered by Craig Gentry in 2009, lets you do arbitrary computations on encrypted data without ever needing to decrypt it.  It has both an encryption key, for encrypting the original plaintext data, and a separate decryption key, for decrypting the final answer.
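
(Gottesman's argument needs only the interface: encrypt with a public key, compute on ciphertexts, decrypt with a separate secret key. Here's a toy illustration of that interface, using the Paillier cryptosystem, which is only additively homomorphic, not fully homomorphic like Gentry's scheme, and laughably insecure at this key size; it's just to make “computing on data you can't read” concrete.)

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Demo only -- tiny hard-coded primes, no padding, no security.
import math, random

p, q = 104729, 1299709             # small known primes (real keys: ~1024-bit primes)
n = p * q                          # public encryption key
n_sq = n * n
g = n + 1                          # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)       # secret decryption key, part 1
mu = pow(lam, -1, n)               # secret decryption key, part 2

def encrypt(m):
    """Encrypt integer m (0 <= m < n) using only the PUBLIC key."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:     # r must be a unit mod n
        r = random.randrange(2, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    """Decrypt using the SECRET key (lam, mu), which Alice doesn't have."""
    return ((pow(c, lam, n_sq) - 1) // n) * mu % n

def add_encrypted(c1, c2):
    """Homomorphic addition: multiplying ciphertexts adds the plaintexts."""
    return (c1 * c2) % n_sq

# Anyone holding only the public key can compute on the encrypted data...
c = add_encrypted(encrypt(20), encrypt(22))
# ...but only the holder of the decryption key can read the result.
assert decrypt(c) == 42
```

A fully homomorphic scheme lets you evaluate arbitrary circuits on ciphertexts in this same one-way spirit, which is exactly what the scenario below exploits.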

Now suppose Alice has some homomorphically encrypted top-secret emails, which she’d like to read.  She has the encryption key (which is public), but not the decryption key.

If the homomorphic encryption scheme is secure against quantum computers—as the schemes discovered by Gentry and later researchers currently appear to be—and if the QECTT is true, then Alice’s goal is obviously infeasible: decrypting the data will take her exponential time.

Now, however, a classical version of Lenny comes along, and explains to Alice that she simply needs to do the following:

  1. Upload her own brain state into a classical computer, destroying the “meat” version in the process (who needed it?).
  2. Using the known encryption key, homomorphically encrypt a computer program that simulates (and thereby, we presume, enacts) Alice’s consciousness.
  3. Using the homomorphically encrypted Alice-brain, together with the homomorphically encrypted input data, do the homomorphic computations that simulate the process of Alice’s brain reading the top-secret emails.

The claim would now be that, inside the homomorphic encryption, the simulated Alice has the subjective experience of reading the emails in the clear.  Aha, therefore she “broke” the homomorphic encryption scheme! Therefore, assuming that the scheme was secure even against quantum computers, the QECTT must be false!

According to Gottesman, this is almost perfectly analogous to Lenny's black hole scenario.  In particular, they share the property that “encryption is easy but decryption is hard.”   Once she's uploaded her brain, Alice can efficiently enter the homomorphically encrypted world to see the solution to a hard problem, just like she can efficiently enter the black hole world to do the same.  In both cases, however, getting back to her normal world with the answer would then take Alice exponential time.  Note that in the latter case, the difficulty is not so much about “escaping from a black hole” as it is about inverting the AdS/CFT dictionary.

Going further, we can regard the AdS/CFT dictionary for regions behind event horizons as, itself, an example of a fully homomorphic encryption scheme—in this case, of course, one where the ciphertexts are quantum states.  This strikes me as potentially an important insight about AdS/CFT itself, even if that wasn’t Gottesman’s intention. It complements many other recent connections between AdS/CFT and theoretical computer science, including the view of AdS/CFT as a quantum error-correcting code, and the connection between AdS/CFT and the Max-Flow/Min-Cut Theorem (see also my talk about my work with Jason Pollack).

So where’s the reductio?  Well, when it’s put so starkly, I suspect that not many would regard Gottesman’s classical homomorphic encryption scenario as a “real” challenge to the QECTT.  Or rather, people might say: yes, this raises fascinating questions for the philosophy of mind, but at any rate, we’re no longer talking about physics.  Unlike with (say) quantum computing, no new physical phenomenon is being brought to light that lets an otherwise intractable computational problem be solved.  Instead, it’s all about the user herself, about Alice, and which physical systems get to count as instantiating her.

It’s like, imagine Alice at the computer store, weighing which laptop to buy. Besides weight, battery life, and price, she definitely does care about processing power. She might even consider a quantum computer, if one is available. Maybe even a computer with a black hole, wormhole, or closed timelike curve inside: as long as it gives the answers she wants, what does she care about the innards? But a computer whose normal functioning would (pessimistically) kill her or (optimistically) radically change her own nature, trapping her in a simulated universe that she can escape only by forgetting the computer’s output? Yeah, I don’t envy the computer salesman.

Anyway, if we’re going to say this about the homomorphic encryption scenario, then shouldn’t we say the same about the simulated black hole scenario?  Again, from an “external” perspective, all that’s happening is a giant BQP computation.  Anything beyond BQP that we consider to be happening, depends on adopting the standpoint of an observer who “jumps into the homomorphic encryption on the CFT boundary”—at which point, it would seem, we’re no longer talking about physics but about philosophy of mind.


So, that was the story! I promised you that it would integrally involve black holes, holography, the Quantum Extended Church-Turing Thesis, fully homomorphic encryption, and brain uploading, and I hope to have delivered on my promise.

Of course, while this blog post has forever cleared up all philosophical confusions about AdS/CFT and the Quantum Extended Church-Turing Thesis, many questions of a more technical nature remain. For example: what about the original scenario? Can we argue that the experiences of bulk observers can be simulated in BQP, even when those observers jump into black holes? Also, what can we say about the complexity class of problems whose answers the simulated Alice can learn? Could she even solve NP-complete problems in polynomial time this way, or at least invert one-way functions? More broadly, what's the power of “BQP with an oracle for applying the AdS/CFT dictionary”—once or multiple times, in one direction or both directions?

Lenny himself described his gedankenexperiment as exploring the power of a new complexity class that he called “JI/poly,” where the JI stands for “Jumping In” (to a black hole, that is). The nomenclature is transparently ridiculous—“/poly” means “with polynomial-size advice,” which we’re not talking about here—and I’ve argued in this post that the “JI” is rather misleading as well. If Alice is “jumping” anywhere, it’s not into a black hole per se, but into a quantum computer that simulates a CFT that’s dual to a bulk universe containing a black hole.

In a broader sense, though, to contemplate these questions at all is clearly to “jump in” to … something. It’s old hat by now that one can start in physics and end up in philosophy: what else is the quantum measurement problem, or the Boltzmann brain problem, or anthropic cosmological puzzles like whether (all else equal) we’re a hundred times as likely to find ourselves in a universe with a hundred times as many observers? More recently, it’s also become commonplace that one can start in physics and end in computational complexity theory: quantum computing itself is the example par excellence, but over the past decade, the Harlow-Hayden argument about decoding Hawking radiation and the complexity = action proposal have made clear that it can happen even in quantum gravity.

Lenny’s new gedankenexperiment, however, is the first case I’ve seen where you start out in physics, and end up embroiled in some of the hardest questions of philosophy of mind and computational complexity theory simultaneously.