Archive for the ‘Complexity’ Category

Quantum supremacy via BosonSampling

Thursday, December 3rd, 2020

A group led by Jianwei Pan, based mainly in Hefei and Shanghai, announced today that it achieved BosonSampling with 40-70 detected photons—up to and beyond the limit where a classical supercomputer could feasibly verify the results. For more, see also Emily Conover’s piece in Science News, or Daniel Garisto’s in Scientific American, both of which I consulted on. (Full disclosure: I was one of the reviewers for the Pan group’s Science paper, and will be writing the Perspective article to accompany it.)

The new result follows the announcement of 14-photon BosonSampling by the same group a year ago. It represents the second time quantum supremacy has been reported, following Google’s celebrated announcement from last year, and the first time it’s been done using photonics rather than superconducting qubits.

As the co-inventor of BosonSampling (with Alex Arkhipov), obviously I’m gratified about this.

For anyone who regards it as boring or obvious, here and here is Gil Kalai, on this blog, telling me why BosonSampling would never scale beyond 8-10 photons. (He wrote that, if aliens forced us to try, then much like with the Ramsey number R(6,6), our only hope would be to attack the aliens.) Here’s Kalai making a similar prediction, on the impossibility of quantum supremacy by BosonSampling or any other means, in his plenary address to the International Congress of Mathematicians two years ago.

Even if we set aside the quantum computing skeptics, many distinguished colleagues told me they thought experimental BosonSampling was a dead end, because of photon losses and the staggering difficulty of synchronizing 50-100 single-photon sources. They said a convincing demonstration of quantum supremacy would have to await the arrival of quantum fault-tolerance—or at any rate, some hardware platform more robust than photonics. I always agreed that they might be right. Furthermore, even if 50-photon BosonSampling was possible, after Google reached the supremacy milestone first with superconducting qubits, it wasn’t clear that anyone was going to bother.

Obviously the new result isn’t dispositive. Maybe Gil Kalai really WON, BY A LOT, before Hugo Chávez hacked the Pan group’s computers to make it seem otherwise? (Sorry, couldn’t resist.) Nevertheless, as someone whose intellectual origins are close to pure math, it’s strange and exciting to find myself in a field where, once in a while, the world itself gets to weigh in on a theoretical disagreement.

Since excitement is best when paired with accurate understanding, please help yourself to the following FAQ, which I might add more to over the next couple days.

What is BosonSampling? You must be new here! In increasing order of difficulty, here’s an MIT News article from back in 2011, here’s the Wikipedia page, here are my PowerPoint slides, here are my lecture notes from Rio de Janeiro, here’s my original paper with Arkhipov…

What is quantum supremacy? Roughly, the use of a programmable or configurable quantum computer to solve some well-defined computational problem much faster than we know how to solve it with any existing classical computer. “Quantum supremacy,” a term coined by John Preskill in 2012, does not mean useful QC, or universal QC, or scalable QC, or fault-tolerant QC, all of which remain outstanding challenges. For more, see my Supreme Quantum Supremacy FAQ, or (e.g.) my recent Lytle Lecture for the University of Washington.

If Google already announced quantum supremacy a year ago, what’s the point of this new experiment? To me, at least, quantum supremacy seems important enough to do at least twice! Also, as I said, this represents the first demonstration that quantum supremacy is possible via photonics. Finally, as the authors point out, the new experiment has one big technical advantage over Google’s: namely, many more possible output states (~10^30 of them, rather than a mere ~9 quadrillion). This makes it infeasible to calculate the whole probability distribution over outputs and store it on a gigantic hard disk (after which one could easily generate as many samples as one wanted), which is what IBM proposed doing in its response to Google’s announcement.
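To make that comparison concrete, here's a back-of-the-envelope sketch. The photonic figure assumes, purely for illustration, 100 output modes with threshold (click/no-click) detectors, so that each outcome is a 100-bit click pattern; the exact mode count isn't stated above.

```python
# Back-of-the-envelope comparison of output-space sizes.
# Assumption (illustration only): ~100 output modes with threshold
# detectors, so outcomes are 100-bit click patterns.

google_outcomes = 2 ** 53      # 53 qubits -> 53-bit output strings
photonic_outcomes = 2 ** 100   # hypothetical 100-mode click patterns

print(f"Google Sycamore: {google_outcomes:.2e} outcomes")    # ~9.0e+15
print(f"Photonic:        {photonic_outcomes:.2e} outcomes")  # ~1.3e+30

# Storing one 4-byte probability per outcome, as in the proposed
# hard-disk attack on the 53-qubit experiment:
petabytes = google_outcomes * 4 / 1e15
print(f"Full 53-qubit distribution: about {petabytes:.0f} PB")
```

Tens of petabytes is enormous but within reach of a top supercomputer's disk; ~10^30 entries is not within reach of anything.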

Is BosonSampling a form of universal quantum computing? No, we don’t even think it can simulate universal classical computing! It’s designed for exactly one task: namely, demonstrating quantum supremacy and refuting Gil Kalai. It might have some other applications besides that, but if so, they’ll be icing on the cake. This is in contrast to Google’s Sycamore processor, which in principle is a universal quantum computer, just with a severe limit on the number of qubits (53) and how many layers of gates one can apply to them (about 20).

Is BosonSampling at least a step toward universal quantum computing? I think so! In 2001, Knill, Laflamme, and Milburn (KLM) famously showed that pure, non-interacting photons, passing through a network of beamsplitters, are capable of universal QC, provided we assume one extra thing: namely, the ability to measure the photons at intermediate times, and change which beamsplitters to apply to the remaining photons depending on the outcome. In other words, “BosonSampling plus adaptive measurements equals universality.” Basically, KLM is the holy grail that Pan’s and other experimental optics groups around the world have been working toward for 20 years, with BosonSampling just a more achievable pit stop along the way.

Are there any applications of BosonSampling? We don’t know yet. There are proposals in the literature to apply BosonSampling to quantum chemistry and other fields, but I’m not yet convinced that these proposals will yield real speedups over the best we can do with classical computers, for any task of practical interest that involves estimating specific numbers (as opposed to sampling tasks, where BosonSampling almost certainly does yield exponential speedups, but which are rarely the thing practitioners directly care about).

How hard is it to simulate BosonSampling on a classical computer? As far as we know today, the difficulty of simulating a “generic” BosonSampling experiment increases roughly like 2^n, where n is the number of detected photons. It might be easier than that, particularly when noise and imperfections are taken into account; and at any rate it might be easier to spoof the statistical tests that one applies to verify the outputs. I and others managed to give some theoretical evidence against those possibilities, but just like with Google’s experiment, it’s conceivable that some future breakthrough will change the outlook and remove the case for quantum supremacy.
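The source of that 2^n is the matrix permanent: each BosonSampling amplitude is the permanent of an n×n submatrix of the interferometer’s unitary, and the fastest known exact algorithms, like Ryser’s formula, take time exponential in n. Here’s a minimal pure-Python sketch (illustrative only, nothing like the optimized codes one would actually run):

```python
def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula:
    O(2^n * n^2) time, versus O(n^3) for the determinant."""
    n = len(A)
    total = 0
    for mask in range(1, 2 ** n):            # nonempty column subsets S
        prod = 1
        for row in A:
            prod *= sum(row[j] for j in range(n) if mask >> j & 1)
        sign = -1 if bin(mask).count("1") % 2 else 1
        total += sign * prod
    return (-1) ** n * total

# Sanity checks: the permanent of the identity is 1, and of the
# all-ones n x n matrix it's n! -- so both the runtime and the
# values themselves blow up fast.
assert permanent([[1, 0], [0, 1]]) == 1
assert permanent([[1] * 4 for _ in range(4)]) == 24   # 4! = 24
```

Unlike the determinant, no Gaussian-elimination trick is known: flipping the minus signs to pluses destroys the algebraic structure that makes determinants easy.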

Do you have any amusing stories? When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50? A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there. This was by far the most expensive referee report I ever wrote!

Also: when Covid first started, and facemasks were plentiful in China but almost impossible to get in the US, Chaoyang Lu, one of the authors of the new work and my sometime correspondent on the theory of BosonSampling, decided to mail me a box of 200 masks (I didn’t ask for it). I don’t think that influenced my later review, but was appreciated nonetheless.

Huge congratulations to the whole team for their accomplishment!

Happy Thanksgiving Y’All!

Wednesday, November 25th, 2020

While a lot of pain is still ahead, this year I’m thankful that a dark chapter in American history might be finally drawing to a close. I’m thankful that the mRNA vaccines actually work. I’m thankful that my family has remained safe, and I’m thankful for all the essential workers who’ve kept our civilization running.

A few things:

  1. Friend-of-the-blog Jelani Nelson asked me to advertise an important questionnaire for theoretical computer scientists, about what the future of STOC and FOCS should look like (for example, should they become all virtual?). It only takes 2 or 3 minutes to fill out (I just did).
  2. Here’s a podcast that I recently did with UT Austin undergraduate Dwarkesh Patel. (As usual, I recommend 2x speed to compensate for my verbal tics.)
  3. Feel free to use the comments on this post to talk about recent progress in quantum computing or computational complexity! Like, I dunno, a (sub)exponential black-box speedup for the adiabatic algorithm, or anti-concentration for log-depth random quantum circuits, or an improved shadow tomography procedure, or a quantum algorithm for nonlinear differential equations, or a barrier to proving strong 3-party parallel repetition, or equivalence of one-way functions and time-bounded Kolmogorov complexity, or turning any hard-on-average NP problem into one that’s guaranteed to have solutions.
  4. It’s funny how quantum computing, P vs. NP, and so forth can come to feel like just an utterly mundane day job, not something anyone outside a small circle could possibly want to talk about while the fate of civilization hangs in the balance. Sometimes it takes my readers to remind me that not only are these topics what brought most of you here in the first place, they’re also awesome! So, I’ll mark that down as one more thing to be thankful for.

The Complexity Zoo needs a new home

Thursday, November 12th, 2020

Update (Nov. 14): I now have a deluge of serious hosting offers—thank you so much, everyone! No need for more.

Since I’m now feeling better that the first authoritarian coup attempt in US history will probably sort itself out OK, here’s a real problem:

Nearly a score of years ago, I created the Complexity Zoo, a procrastination project turned encyclopedia of complexity classes. Nearly half a score of years ago, the Zoo moved to my former employer, the Institute for Quantum Computing in Waterloo, Canada, which has graciously hosted it ever since. Alas, IQC has decided that it can no longer do so. The reason is new regulations in Ontario about the accessibility of websites, which the Zoo might be out of compliance with. My students and I were willing to look into what was needed—like, does the polynomial hierarchy need ramps between its levels or something? The best would be if we heard from actual blind or other disabled complexity enthusiasts about how we could improve their experience, rather than trying to parse bureaucratese from the Ontario government. But IQC informed us that in any case, they can’t deal with the potential liability and their decision is final. I thank them for hosting the Zoo for eight years.

Now I’m looking for a volunteer for a new host. The Zoo runs on the MediaWiki platform, which doesn’t work with my own hosting provider (Bluehost) but is apparently easy to set up if you, unlike me, are the kind of person who can do such things. The IQC folks kindly offered to help with the transfer; I and my students can help as well. It’s a small site with modest traffic. The main things I need are just assurances that you can host the site for a long time (“forever” or thereabouts), and that you or someone else in your organization will be reachable if the site goes down or if there are other problems. I own the domain and can redirect from there.

In return, you’ll get the immense prestige of hosting such a storied resource for theoretical computer science … plus free publicity for your cause or organization on Shtetl-Optimized, and the eternal gratitude of thousands of my readers.

Of course, if you’re into complexity theory, and you want to update or improve the Zoo while you’re at it, then so much the awesomer! It could use some updates, badly. But you don’t even need to know P from NP.

If you’re interested, leave a comment or shoot me an email. Thanks!!

Unrelated Announcement: I’ll once again be doing an Ask-Me-Anything session at the Q2B (“Quantum to Business”) conference, December 8-10. Other speakers include Umesh Vazirani, John Preskill, Jennifer Ouellette, Eric Schmidt, and many others. Since the conference will of course be virtual this year, registration is a lot cheaper than in previous years. Check it out! (Full disclosure: Q2B is organized by QC Ware, Inc., for which I’m a scientific advisor.)

My second podcast with Lex Fridman

Monday, October 12th, 2020

Here it is—enjoy! (I strongly recommend listening at 2x speed.)

We recorded it a month ago—outdoors (for obvious covid reasons), on a covered balcony in Austin, as it drizzled all around us. Topics included:

  • Whether the universe is a simulation
  • Eugene Goostman, GPT-3, the Turing Test, and consciousness
  • Why I disagree with Integrated Information Theory
  • Why I disagree with Penrose’s ideas about physics and the mind
  • Intro to complexity theory, including P, NP, PSPACE, BQP, and SZK
  • The US’s catastrophic failure on covid
  • The importance of the election
  • My objections to cancel culture
  • The role of love in my life (!)

Thanks so much to Lex for his characteristically probing questions, apologies as always for my verbal tics, and here’s our first podcast for those who missed that one.

My Utility+ podcast with Matthew Putman

Thursday, September 3rd, 2020

Another Update (Sep. 15): Sorry for the long delay; new post coming soon! To tide you over—or just to distract you from the darkness figuratively and literally engulfing our civilization—here’s a Fortune article about today’s announcement by IBM of its plans for the next few years in superconducting quantum computing, with some remarks from yours truly.

Another Update (Sep. 8): A reader wrote to let me know about a fundraiser for Denys Smirnov, a 2015 IMO gold medalist from Ukraine who needs an expensive bone marrow transplant to survive Hodgkin’s lymphoma. I just donated and I hope you’ll consider it too!

Update (Sep. 5): Here’s another quantum computing podcast I did, “Dunc Tank” with Duncan Gammie. Enjoy!

Thanks so much to Shtetl-Optimized readers, so far we’ve raised $1,371 for the Biden-Harris campaign and $225 for the Lincoln Project, which I intend to match for $3,192 total. If you’d like to donate by tonight (Thursday night), there’s still $404 to go!

Meanwhile, a mere three days after declaring my “new motto,” I’ve come up with a new new motto for this blog, hopefully a more cheerful one:

When civilization seems on the brink of collapse, sometimes there’s nothing left to talk about but maximal separations between randomized and quantum query complexity.

On that note, please enjoy my new one-hour podcast on Spotify (if that link doesn’t work, try this one) with Matthew Putman of Utility+. Alas, my umming and ahhing were more frequent than I now aim for, but that’s partly compensated for by Matthew’s excellent decision to speed up the audio. This was an unusually wide-ranging interview, covering everything from SlateStarCodex to quantum gravity to interdisciplinary conferences to the challenges of teaching quantum computing to 7-year-olds. I hope you like it!

The Busy Beaver Frontier

Thursday, July 23rd, 2020

Update (July 27): I now have a substantially revised and expanded version (now revised and expanded even a second time), which incorporates (among other things) the extensive feedback that I got from this blog post. There are new philosophical remarks, some lovely new open problems, and an even-faster-growing (!) integer sequence. Check it out!

Another Update (August 13): Nick Drozd now has a really nice blog post about his investigations of my Beeping Busy Beaver (BBB) function.

A life that was all covid, cancellations, and Trump, all desperate rearguard defense of the beleaguered ideals of the Enlightenment, would hardly be worth living. So it was an exquisite delight, these past two weeks, to forget current events and write an 18-page survey article about the Busy Beaver function: the staggeringly quickly-growing function that probably encodes a huge portion of all interesting mathematical truth in its first hundred values, if only we could know those values or exploit them if we did.

Without further ado, here’s the title, abstract, and link:

The Busy Beaver Frontier
by Scott Aaronson

The Busy Beaver function, with its incomprehensibly rapid growth, has captivated generations of computer scientists, mathematicians, and hobbyists. In this survey, I offer a personal view of the BB function 58 years after its introduction, emphasizing lesser-known insights, recent progress, and especially favorite open problems. Examples of such problems include: when does the BB function first exceed the Ackermann function? Is the value of BB(20) independent of set theory? Can we prove that BB(n+1) > 2^BB(n) for large enough n? Given BB(n), how many advice bits are needed to compute BB(n+1)? Do all Busy Beavers halt on all inputs, not just the 0 input? Is it decidable whether BB(n) is even or odd?

The article is slated to appear soon in SIGACT News. I’m grateful to Bill Gasarch for suggesting it—even with everything else going on, this was a commission I felt I couldn’t turn down!

Besides Bill, I’m grateful to the various Busy Beaver experts who answered my inquiries, to Marijn Heule and Andy Drucker for suggesting some of the open problems, to Marijn for creating a figure, and to Lily, my 7-year-old daughter, for raising the question about the first value of n at which the Busy Beaver function exceeds the Ackermann function. (Yes, Lily’s covid homeschooling has included multiple lessons on very large positive integers.)

There are still a few days until I have to deliver the final version. So if you spot anything wrong or in need of improvement, don’t hesitate to leave a comment or send an email. Thanks in advance!

Of course Busy Beaver has been an obsession that I’ve returned to many times in my life: for example, in that Who Can Name the Bigger Number? essay that I wrote way back when I was 18, in Quantum Computing Since Democritus, in my public lecture at Festivaletteratura, and in my 2016 paper with Adam Yedidia that showed that the values of all Busy Beaver numbers beyond the 7910th are independent of the axioms of set theory (Stefan O’Rear has since shown that independence starts at the 748th value or sooner). This survey, however, represents the first time I’ve tried to take stock of BusyBeaverology as a research topic—collecting in one place all the lesser-known theorems and empirical observations and open problems that I found the most striking, in the hope of inspiring not just contemplation or wonderment but actual progress.
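For a sense of what “knowing those values” involves, here’s a toy brute-force computation of the n=2 case, under one common convention (2-symbol machines, blank tape, and the halting transition counts as a step). The known values are S(2)=6 steps and Σ(2)=4 ones; the same naive approach is already hopeless a few values higher.

```python
from itertools import product

def run(machine, cap=100):
    """Simulate a 2-state, 2-symbol Turing machine on a blank tape.
    machine maps (state, scanned symbol) -> (write, move, next state);
    next state 'H' means halt (that transition still counts as a step).
    Returns (steps, ones written) if it halts within cap steps, else None."""
    tape, pos, state = {}, 0, 'A'
    for step in range(1, cap + 1):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == 'H':
            return step, sum(tape.values())
        state = nxt
    return None                               # didn't halt within cap steps

def busy_beaver_2():
    """Brute force over all 12^4 = 20736 two-state machines."""
    actions = [(w, m, s) for w in (0, 1) for m in (-1, 1)
               for s in ('A', 'B', 'H')]
    keys = [('A', 0), ('A', 1), ('B', 0), ('B', 1)]
    best_steps = best_ones = 0
    for acts in product(actions, repeat=4):
        result = run(dict(zip(keys, acts)))
        if result:
            best_steps = max(best_steps, result[0])
            best_ones = max(best_ones, result[1])
    return best_steps, best_ones

print(busy_beaver_2())   # (6, 4): S(2) = 6 steps, Sigma(2) = 4 ones
```

The catch, of course, is the `cap` parameter: for small n a generous step limit happens to suffice, but in general deciding which machines never halt is exactly the halting problem, which is why BB is uncomputable.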

Within the last few months, the world of deep mathematics that you can actually explain to a child lost two of its greatest giants: John Conway (who died of covid, and who I eulogized here) and Ron Graham. One thing I found poignant, and that I didn’t know before I started writing, is that Conway and Graham both play significant roles in the story of the Busy Beaver function. Conway, because most of the best known candidates for Busy Beaver Turing machines turn out, when you analyze them, to be testing variants of the notorious Collatz Conjecture—and Conway is the one who proved, in 1972, that the set of “Collatz-like questions” is Turing-undecidable. And Graham because of Graham’s number from Ramsey theory—a candidate for the biggest number that’s ever played a role in mathematical research—and because of the discovery, four years ago, that the 18th Busy Beaver number exceeds Graham’s number.

(“Just how big is Graham’s number? So big that the 17th Busy Beaver number is not yet known to exceed it!”)

Anyway, I tried to make the survey pretty accessible, while still providing enough technical content to sink one’s two overgrown front teeth into (don’t worry, there are no such puns in the piece itself). I hope you like reading it at least 1/BB(10) as much as I liked writing it.

Update (July 24): Longtime commenter Joshua Zelinsky gently reminded me that one of the main questions discussed in the survey—namely, whether we can prove BB(n+1) > 2^BB(n) for all large enough n—was first brought to my attention by him, Joshua, in a 2013 Ask-Me-Anything session on this blog! I apologize to Joshua for the major oversight, which has now been corrected. On the positive side, we just got a powerful demonstration both of the intellectual benefits of blogging, and of the benefits of sharing paper drafts on one’s blog before sending them to the editor!

Is this blog obsolete?

Monday, July 20th, 2020

Longtime friend-of-the-blog Michael Nielsen, formerly a quantum computing theorist and now an independent thinker about science and AI and learning, posted the following exchange with the astounding GPT-3 text generator / chatbot:

Q: Is it possible to use quantum computers to solve NP-complete problems in polynomial time?
A: It is not possible to use quantum computers to solve NP-complete problems in polynomial time.

Q: Why not?
A: Because NP-complete problems are not in BQP.

Q: You’ve just restated your earlier assertion! What I’d like is to understand the underlying reason why not!
A: I’m sorry, but I don’t have time to explain the underlying reason why not.

Q: No offence, but this seems really bogus. I don’t believe you understand the reason why!
A: I’m sorry, but I don’t have time to explain the underlying reason why not.

(And then there are a few more rounds of the AI not having time to “explain the underlying reason why not.”)

This is, like, 80% of the way to replacing Shtetl-Optimized!

For much more discussion of GPT-3 and its implications, and samples of its output, see for example the SSC subreddit. At the moment, as far as I can tell, the closest a person off the street can easily come to experimenting with GPT-3 themselves is using a website called AI Dungeon.

And yes, as many have already remarked, this is clearly the MITS Altair of text-generating AI, an amusing toy that’s also the start of something that will change the world.

Quantum Computing Since Democritus: New Foreword!

Saturday, June 20th, 2020

Time for a non-depressing post. Quantum Computing Since Democritus, which is already available in English and Russian, is about to be published in both Chinese and Japanese. (So if you read this blog, but have avoided tackling QCSD because your Chinese or Japanese is better than your English, today’s your day!) To go along with the new editions, Cambridge University Press asked me to write a new foreword, reflecting on what happened in the seven years since the book was published. The editor, Paul Dobson, kindly gave me permission to share the new foreword on my blog. So without further ado…

Quantum Computing Since Democritus began its life as a course that I taught at the University of Waterloo in 2006.  Seven years later, it became the book that you now hold.  Its preface ended with the following words:

Here’s hoping that, in 2020, this book will be as badly in need of revision as the 2006 lecture notes were in 2013.

As I write this, in June 2020, a lot has happened that I would never have predicted in 2013.  Donald Trump is the President of the United States, and is up for reelection shortly.  This is not a political book, so let me resist the urge to comment further.  Meanwhile, the coronavirus pandemic is ravaging the world, killing hundreds of thousands of people, crashing economies, and shutting down schools and universities (including mine).  And in the past few weeks, protests against racism and police brutality started in America and then spread to the world, despite the danger of protesting during a pandemic.

Leaving aside the state of the world, my own life is also very different than it was seven years ago.  Along with my family, I’ve moved from MIT to the University of Texas in Austin.  My daughter, who was born at almost exactly the same time as Quantum Computing Since Democritus, is now a first-grader, and is joined by a 3-year-old son.  When my daughter’s school shut down due to the coronavirus, I began home-schooling her in math, computer science, and physics—in some of the exact same topics covered in this book.  I’m now engaged in an experiment to see what portion of this material can be made accessible to a 7-year-old.

But what about the material itself?  How has it held up over seven years?  Both the bad news and the (for you) good news, I suppose, is that it’s not particularly out of date.  The intellectual underpinnings of quantum computing and its surrounding disciplines remain largely as they were.  Still, let me discuss what has changed.

Between 2013 and 2020, the field of quantum computing made a striking transition, from a mostly academic pursuit to a major technological arms race.  The Chinese government, the US government, and the European Union have all pledged billions of dollars for quantum computing research.  Google, Microsoft, IBM, Amazon, Alibaba, Intel, and Honeywell also now all have well-funded groups tasked with building quantum computers, or providing quantum-computing-related software and services, or even just doing classical computing that’s “quantum-inspired.”  These giants are joined by dozens of startups focused entirely on quantum computing.

The new efforts vary greatly in caliber; some efforts seem rooted in visions of what quantum computers will be able to help with, and how soon, that I find to be wildly overoptimistic or even irresponsible.  But perhaps it’s always this way when a new technology moves from an intellectual aspiration to a commercial prospect.  Having joined the field around 1999, before there were any commercial efforts in quantum computing, I’ve found the change disorienting.

But while some of the new excitement is based on pure hype—on marketers now mixing some “quantum” into their word-salad of “blockchain,” “deep learning,” etc., with no particular understanding of any of the ingredients—there really have been some scientific advances in quantum computing since 2013, a fire underneath the smoke.

Surely the crowning achievement of quantum computing during this period was the achievement of “quantum supremacy,” which a team at Google announced in the fall of 2019.  For the first time, a programmable quantum computer was used to outperform any classical computer on earth, running any currently known algorithm.  Google’s device, called “Sycamore,” with 53 superconducting qubits cooled to a hundredth of a degree above absolute zero, solved a well-defined albeit probably useless sampling problem in about 3 minutes.  To compare, current state-of-the-art simulations on classical computers need a few days, even with hundreds of thousands of parallel processors.  Ah, but will a better classical simulation be possible?  That’s an open question in quantum complexity!  The discussion of that question draws on theoretical work that various colleagues and I did over the past decade.  That work in turn draws on my so-called PostBQP=PP theorem from 2004, explained in this book.
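As an aside on why those classical simulations are so costly: the direct approach stores one complex amplitude per basis state, so memory doubles with every added qubit—roughly 2^53 × 16 bytes ≈ 144 petabytes for a full 53-qubit state in double precision. Here’s a toy pure-Python statevector simulator to illustrate (nothing like the optimized codes used in practice):

```python
def apply_gate(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector,
    stored as a flat list of 2**n complex amplitudes."""
    out = state[:]
    step = 1 << q
    for i in range(1 << n):
        if not i & step:               # pair index i (q=0) with i|step (q=1)
            a, b = state[i], state[i | step]
            out[i] = gate[0][0] * a + gate[0][1] * b
            out[i | step] = gate[1][0] * a + gate[1][1] * b
    return out

H = [[2 ** -0.5, 2 ** -0.5],
     [2 ** -0.5, -2 ** -0.5]]          # Hadamard gate

n = 3
state = [0j] * (1 << n)
state[0] = 1 + 0j                      # start in |000>
for q in range(n):                     # apply H to every qubit
    state = apply_gate(state, H, q, n)
# Result: the uniform superposition; each of the 8 outcomes has
# probability 1/8. At 53 qubits, the amplitude list alone would
# need ~144 PB, which is why the state-of-the-art simulations
# resort to cleverer (but still exponential-time) methods.
```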

In the past seven years, there were also several breakthroughs in quantum computing theory—some of which resolved open problems mentioned in this book. 

In 2018, Ran Raz and Avishay Tal gave an oracle relative to which BQP (Bounded-Error Quantum Polynomial-Time) is not contained in PH (the Polynomial Hierarchy).  This solved one of the main open questions, since 1993, about where BQP fits in with classical complexity classes, at least in the black-box setting.  (What does that mean?  Read the book!)  Raz and Tal’s proof used a candidate problem that I had defined in 2009 and called “Forrelation.”
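Forrelation asks, roughly: is the Boolean function g heavily correlated with the Boolean Fourier (Walsh-Hadamard) transform of f? The quantity in question, Φ_{f,g} = 2^{-3n/2} Σ_{x,y} f(x) (-1)^{x·y} g(y), takes ~2^n time to compute classically by the obvious method, while a quantum computer can estimate it with just one query to f and g. An illustrative sketch:

```python
import random

def walsh_hadamard(f):
    """Normalized fast Walsh-Hadamard transform:
    fhat(y) = 2^{-n/2} * sum_x f(x) * (-1)^{x.y}."""
    a, N = f[:], len(f)
    h = 1
    while h < N:
        for i in range(0, N, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return [x / N ** 0.5 for x in a]

def forrelation(f, g):
    """Phi_{f,g} = 2^{-n} * sum_y fhat(y) * g(y)."""
    return sum(fh * gy for fh, gy in zip(walsh_hadamard(f), g)) / len(f)

random.seed(0)
N = 2 ** 10                                                    # n = 10
f = [random.choice((-1, 1)) for _ in range(N)]
g_forr = [1 if fh >= 0 else -1 for fh in walsh_hadamard(f)]    # "forrelated"
g_rand = [random.choice((-1, 1)) for _ in range(N)]            # independent

print(forrelation(f, g_forr))   # large: close to sqrt(2/pi) ~ 0.80
print(forrelation(f, g_rand))   # tiny: on the order of 2^(-n/2)
```

The Raz-Tal theorem says, in effect, that distinguishing these two cases is easy for a quantum computer but remains hard even for the entire polynomial hierarchy, in the black-box setting.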

Also in 2018, Urmila Mahadev gave a protocol, based on cryptography, by which a polynomial-time quantum computer (i.e., a BQP machine) could always prove the results of its computation to a classical polynomial-time skeptic, purely by exchanging classical messages with the skeptic.  Following Urmila’s achievement, I was delighted to give her a $25 prize for solving the problem that I’d announced on my blog back in 2007.

Perhaps most spectacularly of all, in 2020, Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen proved that MIP*=RE.  Here MIP* means the class of problems solvable using multi-prover interactive proof systems with quantumly entangled provers (and classical polynomial-time verifiers), while RE means Recursively Enumerable: a class that includes not only all the computable problems, but even the infamous halting problem (!).  To say it more simply, entangled provers can convince a polynomial-time verifier that an arbitrary Turing machine halts.  Besides its intrinsic interest, a byproduct of this breakthrough was to answer a decades-old question in pure math, the so-called Connes Embedding Conjecture (by refuting the conjecture).  To my knowledge, the new result represents the first time that quantum computing has reached “all the way up the ladder of hardness” to touch uncomputable problems.  It’s also the first time that non-relativizing techniques, like the ones central to the study of interactive proofs, were ever used in computability theory.

In a different direction, the last seven years have witnessed an astonishing convergence between quantum information and quantum gravity—something that was just starting when Quantum Computing Since Democritus appeared in 2013, and that I mentioned as an exciting new direction.  Since then, the so-called “It from Qubit” collaboration has brought together quantum computing theorists with string theorists and former string theorists—experts in things like the black hole information problem—to develop a shared language.  One striking proposal that’s emerged from this is a fundamental role for quantum circuit complexity—that is, the smallest number of 1- and 2-qubit gates needed to prepare a given n-qubit state from the all-0 state—in the so-called AdS/CFT (Anti de Sitter / Conformal Field Theory) correspondence.  AdS/CFT is a duality between physical theories involving different numbers of spatial dimensions; for more than twenty years, it’s been a central testbed for ideas about quantum gravity.  But the duality is extremely nonlocal: a “simple” quantity in the AdS theory, like the volume of a wormhole, can correspond to an incredibly “complicated” quantity in the dual CFT.  The new proposal is that the CFT quantity might be not just complicated, but literally circuit complexity itself.  Fanciful as that sounds, the truth is that no one has come up with any other proposal that passes the same sanity checks.  A related new insight is that the nonlocal mapping between the AdS and CFT theories is not merely analogous to, but literally an example of, a quantum error-correcting code: the same mathematical objects that will be needed to build scalable quantum computers.

When Quantum Computing Since Democritus was first published, some people thought it went too far in elevating computer science, and computational complexity in particular, to fundamental roles in understanding the physical world.  But even I wasn’t audacious enough to posit connections like the ones above, which are now more-or-less mainstream in quantum gravity research.

I’m proud that I wrote Quantum Computing Since Democritus, but as the years go by, I find that I have no particular desire to revise it, or even reread it.  It seems far better for the book to stand as a record of what I knew and believed and cared about at a certain moment in time.

The intellectual quest that’s defined my life—the quest to wrap together computation, physics, math, and philosophy into some sort of coherent picture of the world—might never end.  But it does need to start somewhere.  I’m honored that you chose Quantum Computing Since Democritus as a place to start or continue your own quest.  I hope you enjoy it.

Scott Aaronson
Austin, Texas
June 2020

The US might die, but P and PSPACE are forever

Monday, June 1st, 2020

Today, I interrupt the news of the rapid disintegration of the United States of America, on every possible front at once (medical, economic, social…), to bring you something far more important: a long-planned two-hour podcast, where theoretical physicist and longtime friend-of-the-blog Sean Carroll interviews yours truly about complexity theory! Here’s Sean’s description of this historic event:

There are some problems for which it’s very hard to find the answer, but very easy to check the answer if someone gives it to you. At least, we think there are such problems; whether or not they really exist is the famous P vs NP problem, and actually proving it will win you a million dollars. This kind of question falls under the rubric of “computational complexity theory,” which formalizes how hard it is to computationally attack a well-posed problem. Scott Aaronson is one of the world’s leading thinkers in computational complexity, especially the wrinkles that enter once we consider quantum computers as well as classical ones. We talk about how we quantify complexity, and how that relates to ideas as disparate as creativity, knowledge vs. proof, and what all this has to do with black holes and quantum gravity.
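The “easy to check” half of that asymmetry is worth seeing in code: given a candidate satisfying assignment for a 3-SAT formula, verification is a linear-time scan, whereas no polynomial-time algorithm is known for finding such an assignment. A minimal sketch, using the DIMACS-style convention that the integer v stands for variable v and -v for its negation:

```python
def verify_3sat(clauses, assignment):
    """Check a candidate truth assignment against a CNF formula in
    time linear in the formula size. Each clause is a tuple of
    nonzero ints: v means variable v, -v means NOT v."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

# (x1 or x2 or not x3) and (not x1 or x3 or x4) and (not x2 or not x3 or not x4)
formula = [(1, 2, -3), (-1, 3, 4), (-2, -3, -4)]

print(verify_3sat(formula, {1: True, 2: False, 3: True, 4: True}))    # True
print(verify_3sat(formula, {1: False, 2: False, 3: True, 4: False}))  # False
```

Finding an assignment, by contrast, seems to require searching among 2^n possibilities in the worst case; whether that search can always be avoided is precisely the P vs. NP question.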

So, OK, I guess I should also comment on the national disintegration thing. As someone who was once himself the victim of a crazy police overreaction (albeit, trivial compared to what African-Americans regularly deal with), I was moved by the scenes of police chiefs in several American towns taking off their helmets and joining protesters to cheers. Not only is that a deeply moral thing to do, but it serves a practical purpose of quickly defusing the protests. Right now, of course, is an even worse time than usual for chaos in the streets, with a lethal virus still spreading that doesn’t care whether people are congregating for good or for ill. If rational discussion of policy still matters, I support the current push to end the “qualified immunity” doctrine, end the provision of military training and equipment to police, and generally spur the nation’s police to rein in their psychopath minority.

Quantum Computing Lecture Notes 2.0

Wednesday, May 20th, 2020

Two years ago, I posted detailed lecture notes on this blog for my Intro to Quantum Information Science undergrad course at UT Austin. Today, with enormous thanks to UT PhD student Corey Ostrove, we’ve gotten the notes into a much better shape (for starters, they’re now in LaTeX). You can see the results here (7MB)—it’s basically a 260-page introductory quantum computing textbook in beta form, covering similar material as many other introductory quantum computing textbooks, but in my style for those who like that. It’s missing exercises, as well as material on quantum supremacy experiments, recent progress in hardware, etc., but that will be added in the next version if there’s enough interest. Enjoy!

Unrelated Announcement: Bjorn Poonen at MIT pointed me to a great resource for finding out about technical talks that are being held online in the era of covid. The developers recently added CS as a category, but so far there are very few CS talks listed. Please help fix that!