Archive for the ‘Complexity’ Category

My Utility+ podcast with Matthew Putman

Thursday, September 3rd, 2020

Another Update (Sep. 15): Sorry for the long delay; new post coming soon! To tide you over—or just to distract you from the darkness figuratively and literally engulfing our civilization—here’s a Fortune article about today’s announcement by IBM of its plans for the next few years in superconducting quantum computing, with some remarks from yours truly.

Another Update (Sep. 8): A reader wrote to let me know about a fundraiser for Denys Smirnov, a 2015 IMO gold medalist from Ukraine who needs an expensive bone marrow transplant to survive Hodgkin’s lymphoma. I just donated and I hope you’ll consider it too!

Update (Sep. 5): Here’s another quantum computing podcast I did, “Dunc Tank” with Duncan Gammie. Enjoy!



Thanks so much to Shtetl-Optimized readers: so far we’ve raised $1,371 for the Biden-Harris campaign and $225 for the Lincoln Project, which I intend to match for a total of $3,192. If you’d like to donate by tonight (Thursday night), there’s still $404 to go!

Meanwhile, a mere three days after declaring my “new motto,” I’ve come up with a new new motto for this blog, hopefully a more cheerful one:

When civilization seems on the brink of collapse, sometimes there’s nothing left to talk about but maximal separations between randomized and quantum query complexity.

On that note, please enjoy my new one-hour podcast on Spotify (if that link doesn’t work, try this one) with Matthew Putman of Utility+. Alas, my umming and ahhing were more frequent than I now aim for, but that’s partly compensated for by Matthew’s excellent decision to speed up the audio. This was an unusually wide-ranging interview, covering everything from SlateStarCodex to quantum gravity to interdisciplinary conferences to the challenges of teaching quantum computing to 7-year-olds. I hope you like it!

The Busy Beaver Frontier

Thursday, July 23rd, 2020

Update (July 27): I now have a substantially revised and expanded version (now revised and expanded even a second time), which incorporates (among other things) the extensive feedback that I got from this blog post. There are new philosophical remarks, some lovely new open problems, and an even-faster-growing (!) integer sequence. Check it out!

Another Update (August 13): Nick Drozd now has a really nice blog post about his investigations of my Beeping Busy Beaver (BBB) function.


A life that was all covid, cancellations, and Trump, all desperate rearguard defense of the beleaguered ideals of the Enlightenment, would hardly be worth living. So it was an exquisite delight, these past two weeks, to forget current events and write an 18-page survey article about the Busy Beaver function: the staggeringly quickly-growing function that probably encodes a huge portion of all interesting mathematical truth in its first hundred values, if only we could know those values or exploit them if we did.

Without further ado, here’s the title, abstract, and link:

The Busy Beaver Frontier
by Scott Aaronson

The Busy Beaver function, with its incomprehensibly rapid growth, has captivated generations of computer scientists, mathematicians, and hobbyists. In this survey, I offer a personal view of the BB function 58 years after its introduction, emphasizing lesser-known insights, recent progress, and especially favorite open problems. Examples of such problems include: when does the BB function first exceed the Ackermann function? Is the value of BB(20) independent of set theory? Can we prove that BB(n+1) > 2^BB(n) for large enough n? Given BB(n), how many advice bits are needed to compute BB(n+1)? Do all Busy Beavers halt on all inputs, not just the all-0 input? Is it decidable whether BB(n) is even or odd?
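For readers meeting the Busy Beaver function for the first time: BB(n), in the convention the survey uses, is the maximum number of steps that any n-state, 2-symbol Turing machine can make on an initially all-0 tape before halting. Here’s a minimal brute-force sketch of that definition (my own toy code, not from the survey). Since it caps every run at a fixed number of steps, it really computes a lower bound on BB(n): machines that blow past the cap are skipped rather than proven to run forever, and telling those two cases apart is exactly where the difficulty of the whole subject lives.

```python
from itertools import product

def run(machine, n_states, max_steps):
    """Simulate a 2-symbol Turing machine on an all-0 tape.
    Return the number of steps if it halts within max_steps, else None."""
    tape = {}                       # sparse tape: position -> symbol (default 0)
    state, pos = 0, 0
    for step in range(1, max_steps + 1):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == n_states:         # state number n_states is reserved as "halt"
            return step
        state = nxt
    return None                     # didn't halt within the step cap

def bb_lower_bound(n_states, max_steps=1000):
    """Max steps of any n_states-state machine that halts within the cap."""
    keys = [(s, b) for s in range(n_states) for b in (0, 1)]
    moves = [(w, d, t) for w in (0, 1) for d in (-1, +1)
             for t in range(n_states + 1)]
    best = 0
    for table in product(moves, repeat=len(keys)):
        steps = run(dict(zip(keys, table)), n_states, max_steps)
        if steps is not None and steps > best:
            best = steps
    return best

print(bb_lower_bound(2))   # prints 6, matching the known value BB(2) = 6
```

Even this toy version recovers BB(2) = 6 in a few seconds; already at n = 5 the value of BB(n) isn’t known with certainty, and the enumeration plus the non-halting proofs become the serious business the survey describes.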

The article is slated to appear soon in SIGACT News. I’m grateful to Bill Gasarch for suggesting it—even with everything else going on, this was a commission I felt I couldn’t turn down!

Besides Bill, I’m grateful to the various Busy Beaver experts who answered my inquiries, to Marijn Heule and Andy Drucker for suggesting some of the open problems, to Marijn for creating a figure, and to Lily, my 7-year-old daughter, for raising the question about the first value of n at which the Busy Beaver function exceeds the Ackermann function. (Yes, Lily’s covid homeschooling has included multiple lessons on very large positive integers.)

There are still a few days until I have to deliver the final version. So if you spot anything wrong or in need of improvement, don’t hesitate to leave a comment or send an email. Thanks in advance!

Of course Busy Beaver has been an obsession that I’ve returned to many times in my life: for example, in that Who Can Name the Bigger Number? essay that I wrote way back when I was 18, in Quantum Computing Since Democritus, in my public lecture at Festivaletteratura, and in my 2016 paper with Adam Yedidia that showed that the values of all Busy Beaver numbers beyond the 7910th are independent of the axioms of set theory (Stefan O’Rear has since shown that independence starts at the 748th value or sooner). This survey, however, represents the first time I’ve tried to take stock of BusyBeaverology as a research topic—collecting in one place all the lesser-known theorems and empirical observations and open problems that I found the most striking, in the hope of inspiring not just contemplation or wonderment but actual progress.

Within the last few months, the world of deep mathematics that you can actually explain to a child lost two of its greatest giants: John Conway (who died of covid, and who I eulogized here) and Ron Graham. One thing I found poignant, and that I didn’t know before I started writing, is that Conway and Graham both play significant roles in the story of the Busy Beaver function. Conway, because most of the best known candidates for Busy Beaver Turing machines turn out, when you analyze them, to be testing variants of the notorious Collatz Conjecture—and Conway is the one who proved, in 1972, that the set of “Collatz-like questions” is Turing-undecidable. And Graham because of Graham’s number from Ramsey theory—a candidate for the biggest number that’s ever played a role in mathematical research—and because of the discovery, four years ago, that the 18th Busy Beaver number exceeds Graham’s number.

(“Just how big is Graham’s number? So big that the 17th Busy Beaver number is not yet known to exceed it!”)
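Since the Collatz connection comes up again and again in BusyBeaverology, here’s the map itself in a few lines (a toy of mine, obviously not how one actually analyzes Busy Beaver candidates). The conjecture is that the iteration below reaches 1 from every positive starting value; Conway’s theorem is that sufficiently general questions of this Collatz-like kind are algorithmically undecidable.

```python
def collatz_steps(n):
    """Number of iterations of the Collatz map needed to reach 1 from n.
    The Collatz Conjecture asserts that this loop always terminates."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Length of the longest trajectory among starting values below 100,000:
print(max(collatz_steps(n) for n in range(1, 10**5)))
```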

Anyway, I tried to make the survey pretty accessible, while still providing enough technical content to sink one’s two overgrown front teeth into (don’t worry, there are no such puns in the piece itself). I hope you like reading it at least 1/BB(10) as much as I liked writing it.

Update (July 24): Longtime commenter Joshua Zelinsky gently reminded me that one of the main questions discussed in the survey—namely, whether we can prove BB(n+1) > 2^BB(n) for all large enough n—was first brought to my attention by him, Joshua, in a 2013 Ask-Me-Anything session on this blog! I apologize to Joshua for the major oversight, which has now been corrected. On the positive side, we just got a powerful demonstration both of the intellectual benefits of blogging, and of the benefits of sharing paper drafts on one’s blog before sending them to the editor!

Is this blog obsolete?

Monday, July 20th, 2020

Longtime friend-of-the-blog Michael Nielsen, formerly a quantum computing theorist and now an independent thinker about science and AI and learning, posted the following exchange with the astounding GPT-3 text generator / chatbot:

Q: Is it possible to use quantum computers to solve NP-complete problems in polynomial time?
A: It is not possible to use quantum computers to solve NP-complete problems in polynomial time.

Q: Why not?
A: Because NP-complete problems are not in BQP.

Q: You’ve just restated your earlier assertion! What I’d like is to understand the underlying reason why not!
A: I’m sorry, but I don’t have time to explain the underlying reason why not.

Q: No offence, but this seems really bogus. I don’t believe you understand the reason why!
A: I’m sorry, but I don’t have time to explain the underlying reason why not.

(And then there are a few more rounds of the AI not having time to “explain the underlying reason why not.”)

This is, like, 80% of the way to replacing Shtetl-Optimized!

For much more discussion of GPT-3 and its implications, and samples of its output, see for example the SSC subreddit. At the moment, as far as I can tell, the closest a person off the street can easily come to experimenting with GPT-3 themselves is using a website called AI Dungeon.

And yes, as many have already remarked, this is clearly the MITS Altair of text-generating AI, an amusing toy that’s also the start of something that will change the world.

Quantum Computing Since Democritus: New Foreword!

Saturday, June 20th, 2020

Time for a non-depressing post. Quantum Computing Since Democritus, which is already available in English and Russian, is about to be published in both Chinese and Japanese. (So if you read this blog, but have avoided tackling QCSD because your Chinese or Japanese is better than your English, today’s your day!) To go along with the new editions, Cambridge University Press asked me to write a new foreword, reflecting on what happened in the seven years since the book was published. The editor, Paul Dobson, kindly gave me permission to share the new foreword on my blog. So without further ado…


Quantum Computing Since Democritus began its life as a course that I taught at the University of Waterloo in 2006.  Seven years later, it became the book that you now hold.  Its preface ended with the following words:

Here’s hoping that, in 2020, this book will be as badly in need of revision as the 2006 lecture notes were in 2013.

As I write this, in June 2020, a lot has happened that I would never have predicted in 2013.  Donald Trump is the President of the United States, and is up for reelection shortly.  This is not a political book, so let me resist the urge to comment further.  Meanwhile, the coronavirus pandemic is ravaging the world, killing hundreds of thousands of people, crashing economies, and shutting down schools and universities (including mine).  And in the past few weeks, protests against racism and police brutality started in America and then spread to the world, despite the danger of protesting during a pandemic.

Leaving aside the state of the world, my own life is also very different than it was seven years ago.  Along with my family, I’ve moved from MIT to the University of Texas in Austin.  My daughter, who was born at almost exactly the same time as Quantum Computing Since Democritus, is now a first-grader, and is joined by a 3-year-old son.  When my daughter’s school shut down due to the coronavirus, I began home-schooling her in math, computer science, and physics—in some of the exact same topics covered in this book.  I’m now engaged in an experiment to see what portion of this material can be made accessible to a 7-year-old.

But what about the material itself?  How has it held up over seven years?  Both the bad news and the (for you) good news, I suppose, is that it’s not particularly out of date.  The intellectual underpinnings of quantum computing and its surrounding disciplines remain largely as they were.  Still, let me discuss what has changed.

Between 2013 and 2020, the field of quantum computing made a striking transition, from a mostly academic pursuit to a major technological arms race.  The Chinese government, the US government, and the European Union have all pledged billions of dollars for quantum computing research.  Google, Microsoft, IBM, Amazon, Alibaba, Intel, and Honeywell also now all have well-funded groups tasked with building quantum computers, or providing quantum-computing-related software and services, or even just doing classical computing that’s “quantum-inspired.”  These giants are joined by dozens of startups focused entirely on quantum computing.

The new efforts vary greatly in caliber; some efforts seem rooted in visions of what quantum computers will be able to help with, and how soon, that I find to be wildly overoptimistic or even irresponsible.  But perhaps it’s always this way when a new technology moves from an intellectual aspiration to a commercial prospect.  Having joined the field around 1999, before there were any commercial efforts in quantum computing, I’ve found the change disorienting.

But while some of the new excitement is based on pure hype—on marketers now mixing some “quantum” into their word-salad of “blockchain,” “deep learning,” etc., with no particular understanding of any of the ingredients—there really have been some scientific advances in quantum computing since 2013, a fire underneath the smoke.

Surely the crowning achievement of quantum computing during this period was the demonstration of “quantum supremacy,” which a team at Google announced in the fall of 2019.  For the first time, a programmable quantum computer was used to outperform any classical computer on earth, running any currently known algorithm.  Google’s device, called “Sycamore,” with 53 superconducting qubits cooled to a hundredth of a degree above absolute zero, solved a well-defined albeit probably useless sampling problem in about 3 minutes.  To compare, current state-of-the-art simulations on classical computers need a few days, even with hundreds of thousands of parallel processors.  Ah, but will a better classical simulation be possible?  That’s an open question in quantum complexity!  The discussion of that question draws on theoretical work that various colleagues and I did over the past decade.  That work in turn draws on my so-called PostBQP=PP theorem from 2004, explained in this book.

In the past seven years, there were also several breakthroughs in quantum computing theory—some of which resolved open problems mentioned in this book. 

In 2018, Ran Raz and Avishay Tal gave an oracle relative to which BQP (Bounded-Error Quantum Polynomial-Time) is not contained in PH (the Polynomial Hierarchy).  This solved one of the main open questions, since 1993, about where BQP fits in with classical complexity classes, at least in the black-box setting.  (What does that mean?  Read the book!)  Raz and Tal’s proof used a candidate problem that I had defined in 2009 and called “Forrelation.”

Also in 2018, Urmila Mahadev gave a protocol, based on cryptography, by which a polynomial-time quantum computer (i.e., a BQP machine) could always prove the results of its computation to a classical polynomial-time skeptic, purely by exchanging classical messages with the skeptic.  Following Urmila’s achievement, I was delighted to give her a $25 prize for solving the problem that I’d announced on my blog back in 2007.

Perhaps most spectacularly of all, in 2020, Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen proved that MIP*=RE.  Here MIP* means the class of problems solvable using multi-prover interactive proof systems with quantumly entangled provers (and classical polynomial-time verifiers), while RE means Recursively Enumerable: a class that includes not only all the computable problems, but even the infamous halting problem (!).  To say it more simply, entangled provers can convince a polynomial-time verifier that an arbitrary Turing machine halts.  Besides its intrinsic interest, a byproduct of this breakthrough was to answer a decades-old question in pure math, the so-called Connes Embedding Conjecture (by refuting the conjecture).  To my knowledge, the new result represents the first time that quantum computing has reached “all the way up the ladder of hardness” to touch uncomputable problems.  It’s also the first time that non-relativizing techniques, like the ones central to the study of interactive proofs, were ever used in computability theory.

In a different direction, the last seven years have witnessed an astonishing convergence between quantum information and quantum gravity—something that was just starting when Quantum Computing Since Democritus appeared in 2013, and that I mentioned as an exciting new direction.  Since then, the so-called “It from Qubit” collaboration has brought together quantum computing theorists with string theorists and former string theorists—experts in things like the black hole information problem—to develop a shared language.  One striking proposal that’s emerged from this is a fundamental role for quantum circuit complexity—that is, the smallest number of 1- and 2-qubit gates needed to prepare a given n-qubit state from the all-0 state—in the so-called AdS/CFT (Anti de Sitter / Conformal Field Theory) correspondence.  AdS/CFT is a duality between physical theories involving different numbers of spatial dimensions; for more than twenty years, it’s been a central testbed for ideas about quantum gravity.  But the duality is extremely nonlocal: a “simple” quantity in the AdS theory, like the volume of a wormhole, can correspond to an incredibly “complicated” quantity in the dual CFT.  The new proposal is that the CFT quantity might be not just complicated, but literally circuit complexity itself.  Fanciful as that sounds, the truth is that no one has come up with any other proposal that passes the same sanity checks.  A related new insight is that the nonlocal mapping between the AdS and CFT theories is not merely analogous to, but literally an example of, a quantum error-correcting code: the same mathematical objects that will be needed to build scalable quantum computers.

When Quantum Computing Since Democritus was first published, some people thought it went too far in elevating computer science, and computational complexity in particular, to fundamental roles in understanding the physical world.  But even I wasn’t audacious enough to posit connections like the ones above, which are now more-or-less mainstream in quantum gravity research.

I’m proud that I wrote Quantum Computing Since Democritus, but as the years go by, I find that I have no particular desire to revise it, or even reread it.  It seems far better for the book to stand as a record of what I knew and believed and cared about at a certain moment in time.

The intellectual quest that’s defined my life—the quest to wrap together computation, physics, math, and philosophy into some sort of coherent picture of the world—might never end.  But it does need to start somewhere.  I’m honored that you chose Quantum Computing Since Democritus as a place to start or continue your own quest.  I hope you enjoy it.

Scott Aaronson
Austin, Texas
June 2020

The US might die, but P and PSPACE are forever

Monday, June 1st, 2020

Today, I interrupt the news of the rapid disintegration of the United States of America, on every possible front at once (medical, economic, social…), to bring you something far more important: a long-planned two-hour podcast, where theoretical physicist and longtime friend-of-the-blog Sean Carroll interviews yours truly about complexity theory! Here’s Sean’s description of this historic event:

There are some problems for which it’s very hard to find the answer, but very easy to check the answer if someone gives it to you. At least, we think there are such problems; whether or not they really exist is the famous P vs NP problem, and actually proving it will win you a million dollars. This kind of question falls under the rubric of “computational complexity theory,” which formalizes how hard it is to computationally attack a well-posed problem. Scott Aaronson is one of the world’s leading thinkers in computational complexity, especially the wrinkles that enter once we consider quantum computers as well as classical ones. We talk about how we quantify complexity, and how that relates to ideas as disparate as creativity, knowledge vs. proof, and what all this has to do with black holes and quantum gravity.
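To make the “easy to check, hard to find” asymmetry concrete, here’s a toy sketch (my own code, nothing to do with Sean’s podcast beyond illustrating his description) using Boolean satisfiability: verifying a proposed solution takes time linear in the size of the formula, whereas the obvious way to find one is to try all 2^n assignments. Whether that exponential search can always be replaced by a polynomial-time algorithm is, by the Cook-Levin theorem, exactly the P vs NP question.

```python
"""Toy illustration of the find-vs-check asymmetry behind P vs NP, using
Boolean satisfiability.  A formula is a list of clauses; each clause is a
list of nonzero ints, where literal k means variable |k| and the sign gives
its polarity (DIMACS-style)."""

from itertools import product

def check(formula, assignment):
    """Verify a candidate solution in time linear in the formula size (easy)."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

def find(formula, n_vars):
    """Brute-force search over 2^n_vars assignments (hard, as far as we know)."""
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if check(formula, assignment):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
formula = [[1, -2], [2, 3], [-1, -3]]
print(find(formula, 3))   # a satisfying assignment: {1: False, 2: False, 3: True}
```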

So, OK, I guess I should also comment on the national disintegration thing. As someone who was once himself the victim of a crazy police overreaction (albeit, trivial compared to what African-Americans regularly deal with), I was moved by the scenes of police chiefs in several American towns taking off their helmets and joining protesters to cheers. Not only is that a deeply moral thing to do, but it serves a practical purpose of quickly defusing the protests. Right now, of course, is an even worse time than usual for chaos in the streets, with a lethal virus still spreading that doesn’t care whether people are congregating for good or for ill. If rational discussion of policy still matters, I support the current push to end the “qualified immunity” doctrine, end the provision of military training and equipment to police, and generally spur the nation’s police to rein in their psychopath minority.

Quantum Computing Lecture Notes 2.0

Wednesday, May 20th, 2020

Two years ago, I posted detailed lecture notes on this blog for my Intro to Quantum Information Science undergrad course at UT Austin. Today, with enormous thanks to UT PhD student Corey Ostrove, we’ve gotten the notes into a much better shape (for starters, they’re now in LaTeX). You can see the results here (7MB)—it’s basically a 260-page introductory quantum computing textbook in beta form, covering similar material as many other introductory quantum computing textbooks, but in my style for those who like that. It’s missing exercises, as well as material on quantum supremacy experiments, recent progress in hardware, etc., but that will be added in the next version if there’s enough interest. Enjoy!

Unrelated Announcement: Bjorn Poonen at MIT pointed me to researchseminars.org, a great resource for finding out about technical talks that are being held online in the era of covid. The developers recently added CS as a category, but so far there are very few CS talks listed. Please help fix that!

Four striking papers

Wednesday, May 13th, 2020

In the past week or two, four striking papers appeared on quant-ph. Rather than doing my usual thing—envisioning a huge, meaty blog post about each paper, but then procrastinating on writing them until the posts are no longer even relevant—I thought I’d just write a paragraph about each paper and then open things up for discussion.

(1) Matt Hastings has announced the first provable superpolynomial black-box speedup for the quantum adiabatic algorithm (in its original, stoquastic version). The speedup is only quasipolynomial (n^log(n)) rather than exponential, and it’s for a contrived example (just like in the important earlier work by Freedman and Hastings, which separated the adiabatic algorithm from Quantum Monte Carlo), and there are no obvious near-term practical implications. But still! Twenty years after Farhi and his collaborators wrote the first paper on the quantum adiabatic algorithm, and 13 years after D-Wave made its first hype-laden announcement, this is (to my mind) the first strong theoretical indication that adiabatic evolution with no sign problem can ever get a superpolynomial speedup over not only simulated annealing, not only Quantum Monte Carlo, but all possible classical algorithms. (This had previously been shown only for a variant of the adiabatic algorithm that jumps up to the first excited state, by Nagaj, Somma, and Kieferova.) As such, assuming the result holds up, Hastings resolves a central question that I (for one) had repeatedly asked about for almost 20 years. Indeed, if memory serves, at an Aspen quantum algorithms meeting a few years ago, I strongly urged Hastings to work on the problem. Congratulations to Matt!

(2) In my 2009 paper “Quantum Copy-Protection and Quantum Money,” I introduced the notion of copy-protected quantum software: a state |ψ_f⟩ that you could efficiently use to evaluate a function f, but not to produce more states (whether |ψ_f⟩ or anything else) that would let others evaluate f. I gave candidate constructions for quantumly copy-protecting the simple class of “point functions” (e.g., recognizing a password), and I sketched a proof that quantum copy-protection of arbitrary functions (except for those efficiently learnable from their input/output behavior) was possible relative to a quantum oracle. Building on an idea of Paul Christiano, a couple weeks ago my PhD student Jiahui Liu, Ruizhe Zhang, and I put a preprint on the arXiv improving that conclusion, to show that quantum copy-protection of arbitrary unlearnable functions is possible relative to a classical oracle. But my central open problem remained unanswered: is quantum copy-protection of arbitrary (unlearnable) functions possible in the real world, with no oracle? A couple days ago, Ananth and La Placa put up a preprint where they claim to show that the answer is no, assuming that there’s secure quantum Fully Homomorphic Encryption (FHE) of quantum circuits. I haven’t yet understood the construction, but it looks plausible, and indeed closely related to Barak et al.’s seminal proof of the impossibility of obfuscating arbitrary programs in the classical world. If this holds up, it (conditionally) resolves another of my favorite open problems—indeed, one that I recently mentioned in the Ask-Me-Anything session!

(3) Speaking of Boaz Barak: he, Chi-Ning Chou, and Xun Gao have a new preprint about a fast classical way to spoof Google’s linear cross-entropy benchmark for shallow random quantum circuits (with a bias that degrades exponentially with the depth, remaining detectable up to a depth of say ~√log(n)). As the authors point out, this by no means refutes Google’s supremacy experiment, which involved a larger depth. But along with other recent results in the same direction (e.g. this one), it does show that some exploitable structure is present even in random quantum circuits. Barak et al. achieve their result by simply looking at the marginal distributions on the individual output qubits (although the analysis to show that this works gets rather hairy). Boaz had told me all about this work when I saw him in person—back when traveling and meeting people in person was a thing!—but it’s great to see it up on the arXiv. (For readers unfamiliar with the linear cross-entropy benchmark, a toy sketch of how it’s computed appears right after item (4) below.)

(4) Peter and Raphaël Clifford have announced a faster classical algorithm to simulate BosonSampling. To be clear, their algorithm is still exponential-time, but for the special case of a Haar-random scattering matrix, n photons, and m=n input and output modes, it runs in only ~1.69^n time, as opposed to the previous bound of ~2^n. The upshot is that, if you want to achieve quantum supremacy using BosonSampling, then either you need more photons than previously thought (maybe 90 photons? 100?), or else you need a lot of modes (in our original paper, Arkhipov and I recommended at least m~n^2 modes for several reasons, but naturally the experimentalists would like to cut any corners they can).
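Two toy code sketches to accompany items (3) and (4), both mine rather than anything from the papers. First, the linear cross-entropy benchmark (XEB) that Barak, Chou, and Gao spoof: given a circuit’s ideal output probabilities p(x) on n qubits and a list of observed samples, the score is F = 2^n * mean_i p(x_i) - 1. Sampling uniformly at random scores about 0, while sampling exactly from an ideal Porter-Thomas-shaped distribution scores about 1; the spoofing question is how much of that gap can be closed classically at low depth.

```python
import numpy as np

def xeb_score(ideal_probs, samples):
    """Linear cross-entropy score: 2^n * (average ideal probability of the
    observed samples) - 1."""
    n = int(np.log2(len(ideal_probs)))
    return 2 ** n * np.mean(ideal_probs[samples]) - 1

rng = np.random.default_rng(0)
n = 12
N = 2 ** n

# Stand-in for a random circuit's output distribution: exponentially distributed
# ("Porter-Thomas"-shaped) probabilities, normalized to sum to 1.
p = rng.exponential(size=N)
p /= p.sum()

ideal_samples = rng.choice(N, size=100_000, p=p)   # a perfect sampler
uniform_samples = rng.integers(N, size=100_000)    # the most naive "spoofer"

print(xeb_score(p, ideal_samples))    # close to 1
print(xeb_score(p, uniform_samples))  # close to 0
```

Second, to go with item (4): the hardness of BosonSampling traces back to the fact that each output amplitude is proportional to the permanent of an n-by-n matrix formed from the interferometer’s unitary, and the fastest known way to evaluate a single permanent exactly is Ryser’s formula, with roughly 2^n arithmetic operations; as I understand it, the Clifford-Clifford speedup is for the overall sampling task rather than for any individual permanent. Here’s Ryser’s formula in a few lines (again, a toy sketch of mine):

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Ryser's formula: perm(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * product over rows of the row sums restricted to S.
    Runtime about O(2^n * n^2) as implemented here."""
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            cols = A[:, list(S)].sum(axis=1)   # row sums over the chosen columns
            total += (-1) ** k * np.prod(cols)
    return (-1) ** n * total

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(permanent(A))        # 1*4 + 2*3 = 10
print(np.linalg.det(A))    # determinant -2, for contrast: easy to compute
```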

And what about my own “research program”? Well yesterday, having previously challenged my 7-year-old daughter Lily with instances of comparison sorting, Eulerian tours, undirected connectivity, bipartite perfect matching, stable marriage, factoring, graph isomorphism, unknottedness, 3-coloring, subset sum, and traveling salesman, I finally introduced her to the P vs. NP problem! Even though Lily can’t yet formally define “polynomial,” let alone “algorithm,” I’m satisfied that she understands something of what’s being asked. But, in an unintended echo of one of my more controversial recent posts, Lily insists on pronouncing NP as “nip.”

Announcements

Friday, May 8th, 2020

Update (May 10): Extremely sorry to everyone who wanted to attend my SlateStarCodex talk on quantum necromancy, but wasn’t able due to technical problems! My PowerPoint slides are here; a recording might be made available later. Thanks to everyone who attended and asked great questions. Even though there were many, many bugs to be worked out, I found giving my first talk in virtual reality a fascinating experience; thanks so much to Patrick V. for inviting me and setting it up.

(1) I’ll be giving an online talk at SlateStarCodex (actually, in a VR room where you can walk around with your avatar, mingle, and even try to get “front-row seating”), this coming Sunday at 10:30am US Pacific time = 12:30pm US Central time (i.e., me) = 1:30pm US Eastern time = … Here’s the abstract:

Schrödinger’s Cat and Quantum Necromancy

I’ll try, as best I can, to give a 10-minute overview of the century-old measurement problem of quantum mechanics.  I’ll then discuss a new result, by me and Yosi Atia, that might add a new wrinkle to the problem.  Very roughly, our result says that if you had the technological ability, as measured by (say) quantum circuit complexity, to prove that a cat was in a coherent superposition of the alive and dead states, then you’d necessarily also have the technological ability to bring a dead cat back to life.  Of course, this raises the question of in what sense such a cat was ever “dead” in the first place.

(2) Robin Kothari has a beautiful blog post about a new paper by me, him, Avishay Tal, and Shalev Ben-David, which uses Huang’s recent breakthrough proof of the Sensitivity Conjecture to show that D(f)=O(Q(f)^4) for all total Boolean functions f, where D(f) is the deterministic query complexity of f and Q(f) is the quantum query complexity—thereby resolving another longstanding open problem (the best known relationship since 1998 had been D(f)=O(Q(f)^6)). Check out his post! (A brute-force toy for computing D(f) on tiny functions appears right after item (4) below.)

(3) For all the people who’ve been emailing me, and leaving blog comments, about Stephen Wolfram’s new model of fundamental physics (his new new kind of science?)—Adam Becker now has an excellent article for Scientific American, entitled Physicists Criticize Stephen Wolfram’s “Theory of Everything.” The article quotes me, friend-of-the-blog Daniel Harlow, and several others. The only thing about Becker’s piece that I disagreed with was the amount of space he spent on process (e.g. Wolfram’s flouting of traditional peer review). Not only do I care less and less about such things, but I worry that harping on them feeds directly into Wolfram’s misunderstood-genius narrative. Why not use the space to explain how Wolfram makes a hash of quantum mechanics—e.g., never really articulating how he proposes to get unitarity, or the Born rule, or even a Hilbert space? Anyway, given the demand, I guess I’ll do a separate blog post about this when I have time. (Keep in mind that, with my kids home from school, I have approximately 2 working hours per day.)

(4) Oh yeah, I forgot! Joshua Zelinsky pointed me to a website by Martin Ugarte, which plausibly claims to construct a Turing machine with only 748 states whose behavior is independent of ZF set theory—beating the previous claimed record of 985 states due to Stefan O’Rear (see O’Rear’s GitHub page), which in turn beat the 8000 states of me and Adam Yedidia (see my 2016 blog post about this). I should caution that, to my knowledge, the new construction hasn’t been peer-reviewed, let alone proved correct in a machine-checkable way (well, the latter hasn’t yet been done for any of these constructions). For that matter, while an absolutely beautiful interface is provided, I couldn’t even find documentation for the new construction. Still, Turing machine and Busy Beaver aficionados will want to check it out!
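As promised under item (2): to make the quantity D(f) concrete, here’s a brute-force computation of the deterministic query (decision-tree) complexity of a tiny total Boolean function, using the standard recursion D(f) = 0 if f is constant, and D(f) = min over variables i of [1 + max over b in {0,1} of D(f restricted to x_i = b)] otherwise. (My own toy code; it takes exponential time, so it’s only usable for functions on a handful of bits, and computing Q(f) exactly is a much more involved affair.)

```python
from functools import lru_cache

def query_complexity(f, n):
    """D(f) for a total Boolean function f on n bits, given as a callable
    taking a length-n tuple of 0/1 values."""

    def consistent_inputs(fixed):
        """All length-n 0/1 tuples agreeing with the partial assignment `fixed`."""
        free = [i for i in range(n) if i not in fixed]
        inputs = []
        for mask in range(2 ** len(free)):
            x = dict(fixed)
            for j, i in enumerate(free):
                x[i] = (mask >> j) & 1
            inputs.append(tuple(x[i] for i in range(n)))
        return inputs

    @lru_cache(maxsize=None)
    def D(fixed_items):
        fixed = dict(fixed_items)
        values = {f(x) for x in consistent_inputs(fixed)}
        if len(values) == 1:          # f is constant on the remaining subcube
            return 0
        return min(
            1 + max(D(tuple(sorted({**fixed, i: b}.items()))) for b in (0, 1))
            for i in range(n) if i not in fixed
        )

    return D(())

OR3 = lambda x: int(any(x))
MAJ3 = lambda x: int(sum(x) >= 2)
print(query_complexity(OR3, 3))    # 3: OR is "evasive", you must query every bit
print(query_complexity(MAJ3, 3))   # 3: so is majority of three
```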

Paperz

Tuesday, March 3rd, 2020

Soon, all anyone will want to talk about is quarantines, food shortages, N95 masks, the suspension of universities and of scientific conferences. (As many others have pointed out, this last might actually be a boon to scientific productivity—as it was for the young Isaac Newton, who went home when Cambridge closed for the bubonic plague and invented calculus and mechanics.)

Anyway, before that all happens, I figured I’d get in a last post about quantum information and complexity theory progress.

Hsin-Yuan Huang, Richard Kueng, and John Preskill have a nice preprint entitled Predicting Many Properties of a Quantum System from Very Few Measurements. In it they take shadow tomography, which I proposed a couple years ago, and try to bring it closer to practicality on near-term devices, by restricting to the special case of non-adaptive, one-shot measurements, on separate copies of the state ρ that you’re trying to learn about. They show that this is possible using a number of copies that depends logarithmically on the number of properties you’re trying to learn (the optimal dependence), not at all on the Hilbert space dimension, and linearly on a new “shadow norm” quantity that they introduce.
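To give a feel for what’s going on, here’s a single-qubit toy version of the “classical shadows” idea (my own sketch, restricted to the simplest case of random Pauli-basis measurements; the actual paper handles many qubits, uses median-of-means estimation, and proves the logarithmic scaling in the number of properties). The point is that each randomized measurement yields an unbiased, if individually very noisy, matrix-valued snapshot of ρ, and the same batch of snapshots can then be averaged against as many different observables as you like.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.diag([1, -1j])

# Unitaries mapping the X, Y, Z eigenbases to the computational basis
BASIS_ROTATIONS = {"X": H, "Y": H @ Sdg, "Z": I2}

def classical_shadow(rho, n_rounds, rng):
    """Each round: measure rho in a random Pauli basis with rotation U and
    outcome bit b, then record the single-shot snapshot
        rho_hat = 3 * U^dag |b><b| U - I,
    which averages to rho over the randomness of the basis and outcome."""
    shadows = []
    for _ in range(n_rounds):
        U = BASIS_ROTATIONS[rng.choice(["X", "Y", "Z"])]
        probs = np.real(np.diag(U @ rho @ U.conj().T))
        b = rng.choice(2, p=probs / probs.sum())
        ket = np.zeros((2, 1)); ket[b] = 1
        shadows.append(3 * U.conj().T @ (ket @ ket.conj().T) @ U - I2)
    return shadows

rng = np.random.default_rng(1)
# A state with known expectations: |+><+| has <X> = 1 and <Z> = 0
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
shadows = classical_shadow(rho, 20_000, rng)
for name, O in [("X", X), ("Z", Z)]:
    est = np.mean([np.real(np.trace(O @ s)) for s in shadows])
    print(f"<{name}> estimate: {est:+.2f}")   # roughly +1.00 and +0.00
```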

Rahul Ilango, Bruno Loff, and Igor Oliveira announced the pretty spectacular-sounding result that the Minimum Circuit Size Problem (MCSP) is NP-complete for multi-output functions—that is, for Boolean functions f with not only many input bits but many outputs. Given the 2^n-sized truth table of a Boolean function f:{0,1}^n→{0,1}, the original MCSP simply asks for the size of the smallest Boolean circuit that computes f. This problem was studied in the USSR as early as the 1950s; whether it’s NP-complete has stood for decades as one of the big open problems of complexity theory. We’ve known that if you could quickly solve MCSP then you could also invert any one-way function, but we’ve also known technical barriers to going beyond that to a flat-out NP-hardness result, at least via known routes. Before seeing this paper, I’d never thought about whether MCSP for many-output functions might somehow be easier to classify, but apparently it is!

Hamoon Mousavi, Seyed Nezhadi, and Henry Yuen have now taken the MIP*=RE breakthrough even a tiny step further, by showing that “zero-gap MIP*” (that is, quantum multi-prover interactive proofs with an arbitrarily small gap between the completeness and soundness probabilities) takes you even beyond the halting problem (i.e., beyond Recursively Enumerable or RE), and up to the second level of the arithmetical hierarchy (i.e., to the halting problem for Turing machines with oracles for the original halting problem). This answers a question that someone asked in the comments section of this blog.

Several people asked me for comment on the paper What limits the simulation of quantum computers?, by Yiqing Zhou, Miles Stoudenmire, and Xavier Waintal. In particular, does this paper refute or weaken Google’s quantum supremacy claim, as the paper does not claim to do (but, rather coyly, also does not claim not to do)? Short answer: No, it doesn’t, or not now anyway.

Longer, more technical answer: The quoted simulation times, just a few minutes for quantum circuits with 54 qubits and depth 20, assume Controlled-Z gates rather than iSWAP-like gates. Using tensor network methods, the classical simulation cost with the former is roughly the square root of the simulation cost with the latter (~2^k versus ~4^k for some parameter k related to the depth). As it happens, Google switched its hardware from Controlled-Z to iSWAP-like gates a couple years ago precisely because they realized this—I had a conversation about it with Sergio Boixo at the time. Once this issue is accounted for, the quoted simulation times in the new paper seem to be roughly in line with what was previously reported by, e.g., Johnnie Gray and Google itself.

Oh yeah, I enjoyed Quantum Homeopathy Works. Cool result, and the title is actually a pretty accurate description of the contents.

To end with a community announcement: as many of you might know, the American Physical Society’s March Meeting, which was planned for this week in Denver, was abruptly cancelled due to the coronavirus (leaving thousands of physicists out their flights and hotel rooms—many had even already arrived there). However, my colleague Michael Biercuk kindly alerted me to a “virtual March Meeting” that’s been set up online, with recorded talks and live webinars. Even after the pandemic passes, is this a model that we should increasingly move to? I wouldn’t have thought so ten or fifteen years ago, but today every schlep across the continent brings me a step closer to shouting “yes”…

MIP*=RE

Tuesday, January 14th, 2020

Another Update (Jan. 16): Yet another reason to be excited about this result—one that somehow hadn’t occurred to me—is that, as far as I know, it’s the first-ever fully convincing example of a non-relativizing computability result. See this comment for more.

Update: If you’re interested in the above topic, then you should probably stop reading this post right now, and switch to this better post by Thomas Vidick, one of the authors of the new breakthrough. (Or this by Boaz Barak or this by Lance Fortnow or this by Ken Regan.) (For background, also see Thomas Vidick’s excellent piece for the AMS Notices.)

Still here? Alright, alright…

Here’s the paper, which weighs in at 165 pages. The authors are Zhengfeng Ji, Anand Natarajan, my former postdoc Thomas Vidick, John Wright (who will be joining the CS faculty here at UT Austin this fall), and my wife Dana’s former student Henry Yuen. Rather than pretending that I can provide intelligent commentary on this opus in the space of a day, I’ll basically just open my comment section to discussion and quote the abstract:

We show that the class MIP* of languages that can be decided by a classical verifier interacting with multiple all-powerful quantum provers sharing entanglement is equal to the class RE of recursively enumerable languages. Our proof builds upon the quantum low-degree test of (Natarajan and Vidick, FOCS 2018) by integrating recent developments from (Natarajan and Wright, FOCS 2019) and combining them with the recursive compression framework of (Fitzsimons et al., STOC 2019).
An immediate byproduct of our result is that there is an efficient reduction from the Halting Problem to the problem of deciding whether a two-player nonlocal game has entangled value 1 or at most 1/2. Using a known connection, undecidability of the entangled value implies a negative answer to Tsirelson’s problem: we show, by providing an explicit example, that the closure C_qa of the set of quantum tensor product correlations is strictly included in the set C_qc of quantum commuting correlations. Following work of (Fritz, Rev. Math. Phys. 2012) and (Junge et al., J. Math. Phys. 2011) our results provide a refutation of Connes’ embedding conjecture from the theory of von Neumann algebras.

To say it differently (in response to a commenter’s request), some of the major implications are as follows.

(1) There is a protocol by which two entangled provers can convince a polynomial-time verifier of the answer to any computable problem whatsoever (!!), or indeed that a given Turing machine halts.

(2) There is a two-prover game, analogous to the Bell/CHSH game, for which Alice and Bob can do markedly better with a literally infinite amount of entanglement than they can with any finite amount of entanglement.

(3) There is no algorithm even to approximate the entangled value of a two-prover game (i.e., the probability that Alice and Bob win the game, if they use the best possible strategy and as much entanglement as they like). Instead, this problem is equivalent to the halting problem.

(4) There are types of correlations between Alice and Bob that can be produced using infinite entanglement, but that can’t even be approximated using any finite amount of entanglement.

(5) The Connes embedding conjecture, a central conjecture from the theory of operator algebras dating back to the 1970s, is false.

Note that all of these implications—including the ones for pure math and the foundations of quantum physics—were obtained using tools that originated in theoretical computer science, specifically the study of interactive proof systems.

I can remember when the class MIP* was first defined and studied, back around 2003, and people made the point that we didn’t know any reasonable upper bound on the class’s power—not NEXP, not NEEEEXP, not even the set of all computable languages. Back then, the joke was how far our proof techniques were from what was self-evidently the truth. I don’t remember a single person who seriously contemplated that two entangled provers could convince a polynomial-time verifier that an arbitrary Turing machine halts.

Still, ever since Natarajan and Wright’s NEEXP in MIP* breakthrough last year, all of us in quantum computing theory knew that MIP*=RE was a live possibility—and all through the summer and fall, I heard many hints that such a breakthrough was imminent.

It’s worth pointing out that, with only classical correlations between the provers, MIP gives “merely” the power of NEXP (Nondeterministic Exponential Time), while with arbitrary non-signalling correlations between the provers, the so-called MIP^ns gives the power of EXP (Deterministic Exponential Time). So it’s particularly striking that quantum entanglement, which is “intermediate” between classical correlations and arbitrary non-signalling correlations, yields such wildly greater computational power than either of those two.

The usual proviso applies: when I’ve blogged excitedly about preprints with amazing new results, most have stood, but at least two ended up being retracted. Still, assuming this one stands (as I’m guessing it will), I regard it as easily one of the biggest complexity-theoretic (and indeed computability-theoretic!) surprises so far in this century. Huge congratulations to the authors on what looks to be a historic achievement.

In unrelated news, for anyone for whom the 165-page MIP* paper is too heavy going (really??), please enjoy this CNBC video on quantum computing, which features several clips of yours truly speaking in front of a fake UT tower.

In other unrelated news, I’m also excited about this preprint by Avishay Tal, which sets a new record for the largest known separation between quantum query complexity and classical randomized query complexity, making substantial progress toward proving a conjecture by me and Andris Ambainis from 2015. (Not the “Aaronson-Ambainis Conjecture,” a different conjecture.)
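For readers who want to see the underlying object: the problem driving this line of work is “Forrelation” (Tal’s paper studies k-fold variants of it), which asks how correlated one Boolean function is with the Boolean Fourier (i.e., Hadamard) transform of another. A quantum computer can estimate the forrelation of f and g with a single quantum query, while any classical randomized algorithm needs roughly 2^(n/2)/n queries. Here’s a toy numerical sketch of the quantity itself, assuming the standard definition (my own code and variable names):

```python
import numpy as np

def forrelation(f_vals, g_vals):
    """Phi(f,g) = 2^(-3n/2) * sum_{x,y} f(x) * (-1)^(x.y) * g(y),
    for f_vals, g_vals: length-2^n arrays of +/-1 values indexed by x."""
    N = len(f_vals)
    n = int(np.log2(N))
    idx = np.arange(N)
    # H[x, y] = (-1)^(x.y), with x.y the GF(2) inner product of the bit strings
    H = (-1.0) ** np.array([[bin(x & y).count("1") for y in idx] for x in idx])
    return float(f_vals @ H @ g_vals) / 2 ** (1.5 * n)

# A maximally forrelated pair for n = 2: this f (a "bent" function) has a Fourier
# transform whose sign pattern is f itself, so Phi(f, f) = 1.
f = np.array([1, 1, 1, -1])
print(forrelation(f, f))                  # 1.0

# A random pair is only weakly forrelated, typically around 2^(-n/2) in magnitude:
rng = np.random.default_rng(0)
n = 6
print(forrelation(rng.choice([-1, 1], size=2 ** n),
                  rng.choice([-1, 1], size=2 ** n)))
```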