Archive for the ‘Announcements’ Category

Scott’s Supreme Quantum Supremacy FAQ!

Monday, September 23rd, 2019

You’ve seen the stories—in the Financial Times, Technology Review, CNET, Facebook, Reddit, Twitter, or elsewhere—saying that a group at Google has now achieved quantum computational supremacy with a 53-qubit superconducting device. While these stories are easy to find, I’m not going to link to them here, for the simple reason that none of them were supposed to exist yet.

As the world now knows, Google is indeed preparing a big announcement about quantum supremacy, to coincide with the publication of its research paper in a high-profile journal (which journal? you can probably narrow it down to two). This will hopefully happen within a month.

Meanwhile, though, NASA, some of whose researchers contributed to the work, inadvertently posted an outdated version of the Google paper on a public website. It was there only briefly, but long enough to make it to the Financial Times, my inbox, and millions of other places. Fact-free pontificating about what it means has predictably proliferated.

The world, it seems, is going to be denied its clean “moon landing” moment, wherein the Extended Church-Turing Thesis gets experimentally obliterated within the space of a press conference. This is going to be more like the Wright Brothers’ flight—about which rumors and half-truths leaked out in dribs and drabs between 1903 and 1908, the year Wilbur and Orville finally agreed to do public demonstration flights. (This time around, though, it thankfully won’t take that long to clear everything up!)

I’ve known about what was in the works for a couple months now; it was excruciating not being able to blog about it. Though sworn to secrecy, I couldn’t resist dropping some hints here and there (did you catch any?)—for example, in my recent Bernays Lectures in Zürich, a lecture series whose entire structure built up to the brink of this moment.

This post is not an official announcement or confirmation of anything. Though the lightning may already be visible, the thunder belongs to the group at Google, at a time and place of its choosing.

Rather, because so much misinformation is swirling around, what I thought I’d do here, in my role as blogger and “public intellectual,” is offer Scott’s Supreme Quantum Supremacy FAQ. You know, just in case you were randomly curious about the topic of quantum supremacy, or wanted to know what the implications would be if some search engine company based in Mountain View or wherever were hypothetically to claim to have achieved quantum supremacy.

Without further ado, then:

Q1. What is quantum computational supremacy?

Often abbreviated to just “quantum supremacy,” the term refers to the use of a quantum computer to solve some well-defined set of problems that would take orders of magnitude longer to solve with any currently known algorithms running on existing classical computers—and not for incidental reasons, but for reasons of asymptotic quantum complexity. The emphasis here is on being as sure as possible that the problem really was solved quantumly and really is classically intractable, and ideally achieving the speedup soon (with the noisy, non-universal QCs of the present or very near future). If the problem is also useful for something, then so much the better, but that’s not at all necessary. The Wright Flyer and the Fermi pile weren’t useful in themselves.

Q2. If Google has indeed achieved quantum supremacy, does that mean that now “no code is uncrackable”, as Democratic presidential candidate Andrew Yang recently tweeted?

No, it doesn’t. (But I still like Yang’s candidacy.)

There are two issues here. First, the devices currently being built by Google, IBM, and others have 50-100 qubits and no error-correction. Running Shor’s algorithm to break the RSA cryptosystem would require several thousand logical qubits. With known error-correction methods, that could easily translate into millions of physical qubits, and those probably of a higher quality than any that exist today. I don’t think anyone is close to that, and we have no idea how long it will take.

But the second issue is that, even in a hypothetical future with scalable, error-corrected QCs, on our current understanding they’ll only be able to crack some codes, not all of them. By an unfortunate coincidence, the public-key codes that they can crack include most of what we currently use to secure the Internet: RSA, Diffie-Hellman, elliptic curve crypto, etc. But symmetric-key crypto should only be minimally affected. And there are even candidates for public-key cryptosystems (for example, based on lattices) that no one knows how to break quantumly after 20+ years of trying, and some efforts underway now to start migrating to those systems. For more, see for example my letter to Rebecca Goldstein.

Q3. What calculation is Google planning to do, or has it already done, that’s believed to be classically hard?

So, I can tell you, but I’ll feel slightly sheepish doing so. The calculation is: a “challenger” generates a random quantum circuit C (i.e., a random sequence of 1-qubit and nearest-neighbor 2-qubit gates, of depth perhaps 20, acting on a 2D grid of n = 50 to 60 qubits). The challenger then sends C to the quantum computer, and asks it to apply C to the all-0 initial state, measure the result in the {0,1} basis, send back whatever n-bit string was observed, and repeat some thousands or millions of times. Finally, using its knowledge of C, the classical challenger applies a statistical test to check whether the outputs are consistent with the QC having done this.

So, this is not a problem like factoring with a single right answer. The circuit C gives rise to some probability distribution, call it D_C, over n-bit strings, and the problem is to output samples from that distribution. In fact, there will typically be 2^n strings in the support of D_C—so many that, if the QC is working as expected, the same output will never be observed twice. A crucial point, though, is that the distribution D_C is not uniform. Some strings enjoy constructive interference of amplitudes and therefore have larger probabilities, while others suffer destructive interference and have smaller probabilities. And even though we’ll only see a number of samples that’s tiny compared to 2^n, we can check whether the samples preferentially cluster among the strings that are predicted to be likelier, and thereby build up our confidence that something classically intractable is being done.

So, tl;dr, the quantum computer is simply asked to apply a random (but known) sequence of quantum operations—not because we intrinsically care about the result, but because we’re trying to prove that it can beat a classical computer at some well-defined task.
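To make the task concrete, here is a minimal sketch in Python/NumPy of the whole pipeline at toy scale. It builds a random circuit from Haar-random single-qubit gates and nearest-neighbor CZ gates on a line of 5 qubits (a deliberate simplification of the 2D grid, gate set, and depth described above), applies it to the all-0 state by brute-force statevector simulation, and draws samples from the resulting distribution D_C. None of this reflects Google's actual code or gate set; it is purely illustrative.

```python
import numpy as np

n, depth = 5, 10  # tiny toy scale, so the full 2^n statevector fits in memory
rng = np.random.default_rng(0)

def random_1q_gate(rng):
    # Haar-ish random single-qubit unitary via QR decomposition of a complex Gaussian matrix
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(state, gate, target):
    # Apply a 2x2 gate to one qubit of the n-qubit statevector
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

def apply_cz(state, q1, q2):
    # Apply a controlled-Z (phase -1 when both qubits are 1) between two qubits
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

# Build and run a random circuit C on the all-0 initial state
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0
for layer in range(depth):
    for q in range(n):
        state = apply_1q(state, random_1q_gate(rng), q)
    for q in range(layer % 2, n - 1, 2):   # alternating nearest-neighbor pairs on a line
        state = apply_cz(state, q, q + 1)

# D_C: the circuit's output distribution over n-bit strings
probs = np.abs(state) ** 2
probs = probs / probs.sum()                # guard against floating-point drift

# "Run the experiment": draw samples the way an ideal QC would
samples = rng.choice(2 ** n, size=1000, p=probs)
print("heaviest string:", format(int(np.argmax(probs)), f"0{n}b"),
      "with probability", probs.max())
```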

Q4. But if the quantum computer is just executing some random garbage circuit, whose only purpose is to be hard to simulate classically, then who cares? Isn’t this a big overhyped nothingburger?

No. As I put it the other day, it’s not an everythingburger, but it’s certainly at least a somethingburger!

It’s like, have a little respect for the immensity of what we’re talking about here, and for the terrifying engineering that’s needed to make it reality. Before quantum supremacy, by definition, the QC skeptics can all laugh to each other that, for all the billions of dollars spent over 20+ years, still no quantum computer has even once been used to solve any problem faster than your laptop could solve it, or at least not in any way that depended on its being a quantum computer. In a post-quantum-supremacy world, that’s no longer the case. A superposition involving 2^50 or 2^60 complex numbers has been computationally harnessed, using time and space resources that are minuscule compared to 2^50 or 2^60.

I keep bringing up the Wright Flyer only because the chasm between what we’re talking about, and the dismissiveness I’m seeing in some corners of the Internet, is kind of breathtaking to me. It’s like, if you believed that useful air travel was fundamentally impossible, then seeing a dinky wooden propeller plane keep itself aloft wouldn’t refute your belief … but it sure as hell shouldn’t reassure you either.

Was I right to worry, years ago, that the constant drumbeat of hype about much less significant QC milestones would wear out people’s patience, so that they’d no longer care when something newsworthy finally did happen?

Q5. Years ago, you scolded the masses for being super-excited about D-Wave, and its claims to get huge quantum speedups for optimization problems via quantum annealing. Today you scold the masses for not being super-excited about quantum supremacy. Why can’t you stay consistent?

Because my goal is not to move the “excitement level” in some uniformly preferred direction, it’s to be right! With hindsight, would you say that I was mostly right about D-Wave, even when raining on that particular parade made me unpopular in some circles? Well, I’m trying to be right about quantum supremacy too.

Q6. If quantum supremacy calculations just involve sampling from probability distributions, how do you check that they were done correctly?

Glad you asked! This is the subject of a fair amount of theory that I and others developed over the last decade. I already gave you the short version in my answer to Q3: you check by doing statistics on the samples that the QC returned, to verify that they’re preferentially clustered in the “peaks” of the chaotic probability distribution D_C. One convenient way of doing this, which Google calls the “linear cross-entropy test,” is simply to sum up Pr[C outputs s_i] over all the samples s_1,…,s_k that the QC returned, and then to declare the test a “success” if and only if the sum exceeds some threshold—say, bk/2^n, for some constant b strictly between 1 and 2.

Admittedly, in order to apply this test, you need to calculate the probabilities Pr[C outputs s_i] on your classical computer—and the only known ways to calculate them require brute force and take ~2^n time. Is that a showstopper? No, not if n is 50, and you’re Google and are able to handle numbers like 2^50 (although not 2^1000, which exceeds a googol, har har). By running a huge cluster of classical cores for (say) a month, you can eventually verify the outputs that your QC produced in a few seconds—while also seeing that the QC was many orders of magnitude faster. However, this does mean that sampling-based quantum supremacy experiments are almost specifically designed for ~50-qubit devices like the ones being built right now. Even with 100 qubits, we wouldn’t know how to verify the results using all the classical computing power available on earth.

(Let me stress that this issue is specific to sampling experiments like the ones that are currently being done. If Shor’s algorithm factored a 2000-digit number, it would be easy to check the result by simply multiplying the claimed factors and running a primality test on them. Likewise, if a QC were used to simulate some complicated biomolecule, you could check its results by comparing them to experiment.)
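And here is a similarly hedged toy version of the linear cross-entropy test just described. Rather than simulating a real circuit, it uses a stand-in chaotic distribution (normalized squared magnitudes of random complex Gaussian amplitudes, which is Porter-Thomas-like), and the threshold constant b = 1.5 is just an illustrative choice within the (1,2) range mentioned above, not Google's actual parameter. The point is to see that samples drawn from D_C itself pass the test, while a spoofer that returns uniformly random strings fails.

```python
import numpy as np

n = 10                      # toy number of qubits
N = 2 ** n
rng = np.random.default_rng(0)

# Stand-in for D_C: squared magnitudes of random complex Gaussian amplitudes, normalized
amps = rng.normal(size=N) + 1j * rng.normal(size=N)
probs = np.abs(amps) ** 2
probs = probs / probs.sum()

def linear_xeb(probs, samples, b=1.5):
    # Sum Pr[C outputs s_i] over the returned samples; "success" iff it exceeds b*k/2^n
    k = len(samples)
    score = probs[samples].sum()
    return score, bool(score > b * k / N)

k = 10_000
ideal = rng.choice(N, size=k, p=probs)    # what an ideal (noiseless) QC would return
spoof = rng.integers(0, N, size=k)        # a classical spoofer sampling uniformly

print("ideal sampler:  ", linear_xeb(probs, ideal))   # score near 2k/2^n, so it passes
print("uniform spoofer:", linear_xeb(probs, spoof))   # score near  k/2^n, so it fails
```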

Q7. Wait. If classical computers can only check the results of a quantum supremacy experiment, in a regime where the classical computers can still simulate the experiment (albeit extremely slowly), then how do you get to claim “quantum supremacy”?

Come on. With a 53-qubit chip, it’s perfectly feasible to see a speedup by a factor of many millions, in a regime where you can still directly verify the outputs, and also to see that the speedup is growing exponentially with the number of qubits, exactly as asymptotic analysis would predict. This isn’t marginal.

Q8. Is there a mathematical proof that no fast classical algorithm could possibly spoof the results of a sampling-based quantum supremacy experiment?

Not at present. But that’s not quantum supremacy researchers’ fault! As long as theoretical computer scientists can’t even prove basic conjectures like P≠NP or P≠PSPACE, there’s no hope of ruling out a fast classical simulation unconditionally. The best we can hope for are conditional hardness results. And we have indeed managed to prove some such results—see for example the BosonSampling paper, or the Bouland et al. paper on average-case #P-hardness of calculating amplitudes in random circuits, or my paper with Lijie Chen (“Complexity-Theoretic Foundations of Quantum Supremacy Experiments”). The biggest theoretical open problem in this area, in my opinion, is to prove better conditional hardness results.

Q9. Does sampling-based quantum supremacy have any applications in itself?

When people were first thinking about this subject, it seemed pretty obvious that the answer was “no”! (I know because I was one of the people.) Recently, however, the situation has changed—for example, because of my certified randomness protocol, which shows how a sampling-based quantum supremacy experiment could almost immediately be repurposed to generate bits that can be proven to be random to a skeptical third party (under computational assumptions). This, in turn, has possible applications to proof-of-stake cryptocurrencies and other cryptographic protocols. I’m hopeful that more such applications will be discovered in the near future.

Q10. If the quantum supremacy experiments are just generating random bits, isn’t that uninteresting? Isn’t it trivial to convert qubits into random bits, just by measuring them?

The key is that a quantum supremacy experiment doesn’t generate uniform random bits. Instead, it samples from some complicated, correlated probability distribution over 50- or 60-bit strings. In my certified randomness protocol, the deviations from uniformity play a central role in how the QC convinces a classical skeptic that it really was sampling the bits randomly, rather than in some secretly deterministic way (e.g., using a pseudorandom generator).

Q11. Haven’t decades of quantum-mechanical experiments–for example, the ones that violated the Bell inequality–already demonstrated quantum supremacy?

This is purely a confusion over words. Those other experiments demonstrated other forms of “quantum supremacy”: for example, in the case of Bell inequality violations, what you could call “quantum correlational supremacy.” They did not demonstrate quantum computational supremacy, meaning doing something that’s infeasible to simulate using a classical computer (where the classical simulation has no restrictions of spatial locality or anything else of that kind). Today, when people use the phrase “quantum supremacy,” it’s generally short for quantum computational supremacy.

Q12. Even so, there are countless examples of materials and chemical reactions that are hard to classically simulate, as well as special-purpose quantum simulators (like those of Lukin’s group at Harvard). Why don’t these already count as quantum computational supremacy?

Under some people’s definitions of “quantum computational supremacy,” they do! The key difference with Google’s effort is that they have a fully programmable device—one that you can program with an arbitrary sequence of nearest-neighbor 2-qubit gates, just by sending the appropriate signals from your classical computer.

In other words, it’s no longer open to the QC skeptics to sneer that, sure, there are quantum systems that are hard to simulate classically, but that’s just because nature is hard to simulate, and you don’t get to arbitrarily redefine whatever random chemical you find in the wild to be a “computer for simulating itself.” Under any sane definition, the superconducting devices that Google, IBM, and others are now building are indeed “computers.”

Q13. Did you (Scott Aaronson) invent the concept of quantum supremacy?

No. I did play some role in developing it, which led to Sabine Hossenfelder among others generously overcrediting me for the whole idea. The term “quantum supremacy” was coined by John Preskill in 2012, though in some sense the core concept goes back to the beginnings of quantum computing itself in the early 1980s. In 1993, Bernstein and Vazirani explicitly pointed out the severe apparent tension between quantum mechanics and the Extended Church-Turing Thesis of classical computer science. Then, in 1994, the use of Shor’s algorithm to factor a huge number became the quantum supremacy experiment par excellence—albeit, one that’s still (in 2019) much too hard to perform.

The key idea of instead demonstrating quantum supremacy using a sampling problem was, as far as I know, first suggested by Barbara Terhal and David DiVincenzo, in a farsighted paper from 2002. The “modern” push for sampling-based supremacy experiments started around 2011, when Alex Arkhipov and I published our paper on BosonSampling, and (independently of us) Bremner, Jozsa, and Shepherd published their paper on the commuting Hamiltonians model. These papers showed, not only that “simple,” non-universal quantum systems can solve apparently-hard sampling problems, but also that an efficient classical algorithm for the same sampling problems would imply a collapse of the polynomial hierarchy. Arkhipov and I also made a start toward arguing that even the approximate versions of quantum sampling problems can be classically hard.

As far as I know, the idea of “Random Circuit Sampling”—that is, generating your hard sampling problem by just picking a random sequence of 2-qubit gates in (say) a superconducting architecture—originated in an email thread that I started in December 2015, which also included John Martinis, Hartmut Neven, Sergio Boixo, Ashley Montanaro, Michael Bremner, Richard Jozsa, Aram Harrow, Greg Kuperberg, and others. The thread was entitled “Hard sampling problems with 40 qubits,” and my email began “Sorry for the spam.” I then discussed some advantages and disadvantages of three options for demonstrating sampling-based quantum supremacy: (1) random circuits, (2) commuting Hamiltonians, and (3) BosonSampling. After Greg Kuperberg chimed in to support option (1), a consensus quickly formed among the participants that (1) was indeed the best option from an engineering standpoint—and that, if the theoretical analysis wasn’t yet satisfactory for (1), then that was something we could remedy.

[Update: Sergio Boixo tells me that, internally, the Google group had been considering the idea of random circuit sampling since February 2015, even before my email thread. This doesn’t surprise me: while there are lots of details that had to be worked out, the idea itself is an extremely natural one.]

After that, the Google group did a huge amount of analysis of random circuit sampling, both theoretical and numerical, while Lijie Chen and I and Bouland et al. supplied different forms of complexity-theoretic evidence for the problem’s classical hardness.

Q14. If quantum supremacy was achieved, what would it mean for the QC skeptics?

I wouldn’t want to be them right now! They could retreat to the position that of course quantum supremacy is possible (who ever claimed that it wasn’t? surely not them!), and that the real issue has always been quantum error-correction. And indeed, some of them have consistently maintained that position all along. But others, including my good friend Gil Kalai, are on record, right here on this blog, predicting that even quantum supremacy can never be achieved for fundamental reasons. I won’t let them wiggle out of it now.

[Update: As many of you will have seen, Gil Kalai has taken the position that the Google result won’t stand and will need to be retracted. He asked for more data: specifically, a complete histogram of the output probabilities for a smaller number of qubits. This turns out to be already available, in a Science paper from 2018.]

Q15. What’s next?

If it’s achieved quantum supremacy, then I think the Google group already has the requisite hardware to demonstrate my protocol for generating certified random bits. And that’s indeed one of the very next things they’re planning to do.

[Addendum: Also, of course, the evidence for quantum supremacy itself can be made stronger and various loopholes closed—for example, by improving the fidelity so that fewer samples need to be taken (something that Umesh Vazirani tells me he’d like to see), by having the circuit C be generated and the outputs verified by a skeptic external to Google, and simply by letting more time pass, so outsiders can have a crack at simulating the results classically. My personal guess is that the basic picture is going to stand, but just like with the first experiments that claimed to violate the Bell inequality, there’s still plenty of room to force the skeptics into a tinier corner.]

Beyond that, one obvious next milestone would be to use a programmable QC, with (say) 50-100 qubits, to do some useful quantum simulation (say, of a condensed-matter system) much faster than any known classical method could do it. A second obvious milestone would be to demonstrate the use of quantum error-correction, to keep an encoded qubit alive for longer than the underlying physical qubits remain alive. There’s no doubt that Google, IBM, and the other players will now be racing toward both of these milestones.

[Update: Steve Girvin reminds me that the Yale group has already achieved quantum error-correction “beyond the break-even point,” albeit in a bosonic system rather than superconducting qubits. So perhaps a better way to phrase the next milestone would be: achieve quantum computational supremacy and useful quantum error-correction in the same system.]

Another update: I thought this IEEE Spectrum piece gave a really nice overview of the issues.

Last update: John Preskill’s Quanta column about quantum supremacy is predictably excellent (and possibly a bit more accessible than this FAQ).

Blurry but clear enough

Friday, September 20th, 2019

My vision is blurry right now, because yesterday I had a procedure called corneal cross-linking, intended to prevent further deterioration of my eyes as I get older. But I can see clearly enough to tap out a post with random thoughts about the world.

I’m happy that the Netanyahu era might finally be ending in Israel, after which Netanyahu will hopefully face some long-delayed justice for his eye-popping corruption. If only there were a realistic prospect of Trump facing similar justice. I wish Benny Gantz success in putting together a coalition.

I’m happy that my two least favorite candidates, Bill de Blasio and Kirsten Gillibrand, have now both dropped out of the Democratic primary. Biden, Booker, Warren, Yang—I could enthusiastically support pretty much any of them, if they looked like they had a good chance to defeat Twitler. Let’s hope.

Most importantly, I wish to register my full-throated support for the climate strikes taking place today all over the world, including here in Austin. My daughter Lily, age 6, is old enough to understand the basics of what’s happening and to worry about her future. I urge the climate strikers to keep their eyes on things that will actually make a difference (building new nuclear plants, carbon taxes, geoengineering) and ignore what won’t (banning plastic straws).

As for Greta Thunberg: she is, or is trying to be, the real-life version of the Comet King from Unsong. You can make fun of her, ask what standing or expertise she has as some random 16-year-old to lead a worldwide movement. But I suspect that this is always what it looks like when someone takes something that’s known to (almost) all, and then makes it common knowledge. If civilization makes it to the 22nd century at all, then in whatever form it still exists, I can easily imagine that it will have more statues of Greta than of MLK or Gandhi.

On a completely unrelated and much less important note, John Horgan has a post about “pluralism in math” that includes some comments by me.

Oh, and on the quantum supremacy front—I foresee some big news very soon. You know which blog to watch for more.

Paul Bernays Lectures

Monday, September 9th, 2019

Last week, I had the honor of giving the annual Paul Bernays Lectures at ETH Zürich. My opening line: “as I look at the list of previous Bernays Lecturers—many of them Nobel physics laureates, Fields Medalists, etc.—I think to myself, how badly did you have to screw up this year in order to end up with me?”

Paul Bernays was the primary assistant to David Hilbert, before Bernays (being Jewish by birth) was forced out of Göttingen by the Nazis in 1933. He spent most of the rest of his career at ETH. He’s perhaps best known for the von Neumann-Bernays-Gödel set theory, and for writing (in a volume by “Hilbert and Bernays,” but actually just Bernays) arguably the first full proof of Gödel’s Second Incompleteness Theorem.

Anyway, the idea of the Paul Bernays Lectures is to rotate between Bernays’s different interests in his long, distinguished career—interests that included math, philosophy, logic, and the foundations of physics. I mentioned that, if there’s any benefit to carting me out to Switzerland for these lectures, it’s that quantum computing theory combines all of these interests. And this happens to be the moment in history right before we start finding out, directly from experiments, whether quantum computers can indeed solve certain special problems much faster.

The general theme for my three lectures was “Quantum Computing and the Fundamental Limits of Computation.” The attendance was a few hundred. My idea was to take the audience from Church and Turing in the 1930s, all the way to the quantum computational supremacy experiments that Google and others are doing now—as part of a single narrative.

If you’re interested, streaming video of the lectures is available as of today (though I haven’t watched it—let me know if the quality is OK!), as well as of course my slides. Here you go:

Lecture 1: The Church-Turing Thesis and Physics (watch streaming / PowerPoint slides) (with an intro in German by Giovanni Sommaruga, who knew Bernays, and a second intro in English by Renato Renner, who appeared on this blog here)

Abstract: Is nature computable?  What should we even mean in formulating such a question?  For generations, the identification of “computable” with “computable by a Turing machine” has been seen as either an arbitrary mathematical definition, or a philosophical or psychological claim. The rise of quantum computing and information, however, has brought a fruitful new way to look at the Church-Turing Thesis: namely, as a falsifiable empirical claim about the physical universe.  This talk seeks to examine the computability of the laws of physics from a modern standpoint—one that fully incorporates the insights of quantum mechanics, quantum field theory, quantum gravity, and cosmology.  We’ll critically assess ‘hypercomputing’ proposals involving (for example) relativistic time dilation, black holes, closed timelike curves, and exotic cosmologies, and will make a 21st-century case for the physical Church-Turing Thesis.

Lecture 2: The Limits of Efficient Computation (watch streaming / PowerPoint slides)

Abstract: Computer scientists care about what’s computable not only in principle, but within the resource constraints of the physical universe.  Closely related, which types of problems are solvable using a number of steps that scales reasonably (say, polynomially) with the problem size?  This lecture will examine whether the notorious NP-complete problems, like the Traveling Salesman Problem, are efficiently solvable using the resources of the physical world.  We’ll start with the P=?NP problem of classical computer science—its meaning, history, and current status.  We’ll then discuss quantum computers: how they work, how they can sometimes yield exponential speedups over classical computers, and why many believe that not even they will do so for the NP-complete problems.  Finally, we’ll critically assess proposals that would use exotic physics to go even beyond quantum computers, in terms of what they would render computable in polynomial time.

Lecture 3: The Quest for Quantum Computational Supremacy (watch streaming / PowerPoint slides)

Abstract: Can useful quantum computers be built in our world?  This talk will discuss the current status of the large efforts currently underway at Google, IBM, and many other places to build noisy quantum devices, with 50-100 qubits, that can clearly outperform classical computers at least on some specialized tasks — a milestone that’s been given the unfortunate name of “quantum supremacy.”  We’ll survey recent theoretical work (on BosonSampling, random circuit sampling, and more) that aims to tell us: which problems should we give these devices, that we’re as confident as possible are hard for classical computers? And how should we check whether the devices indeed solved them?  We’ll end by discussing a new protocol, for generating certified random bits, that can be implemented almost as soon as quantum supremacy itself is achieved, and which might therefore become the first application of quantum computing to be realized.

Finally, thanks so much to Giovanni Sommaruga and everyone else at ETH for arranging a fantastic visit.

Links, proofs, talks, jokes

Tuesday, July 30th, 2019

For those who haven’t yet seen it, Erica Klarreich has a wonderful article in Quanta on Hao Huang’s proof of the Sensitivity Conjecture. This is how good popular writing about math can be.

Klarreich quotes my line from this blog, “I find it hard to imagine that even God knows how to prove the Sensitivity Conjecture in any simpler way than this.” However, even if God doesn’t know a simpler proof, that of course doesn’t rule out the possibility that Don Knuth does! And indeed, a couple days ago Knuth posted his own variant of Huang’s proof on his homepage—in Knuth’s words, fleshing out the argument that Shalev Ben-David previously posted on this blog—and then left a comment about it here, the first comment by Knuth that I know about on this blog or any other blog. I’m honored—although as for whether the variants that avoid the Cauchy Interlacing Theorem are actually “simpler,” I guess I’ll leave that between Huang, Ben-David, Knuth, and God.

In Communications of the ACM, Samuel Greengard has a good, detailed article on Ewin Tang and her dequantization of the quantum recommendation systems algorithm. One warning (with thanks to commenter Ted): the sentence “The only known provable separation theorem between quantum and classical is sqrt(n) vs. n” is mistaken, though it gestures in the direction of a truth. In the black-box setting, we can rigorously prove all sorts of separations: sqrt(n) vs. n (for Grover search), exponential (for period-finding), and more. In the non-black-box setting, we can’t prove any such separations at all.

Last week I returned to the US from the FQXi meeting in the Tuscan countryside. This year’s theme was “Mind Matters: Intelligence and Agency in the Physical World.” I gave a talk entitled “The Search for Physical Correlates of Consciousness: Lessons from the Failure of Integrated Information Theory” (PowerPoint slides here), which reprised my blog posts critical of IIT from five years ago. There were thought-provoking talks by many others who might be known to readers of this blog, including Sean Carroll, David Chalmers, Max Tegmark, Seth Lloyd, Carlo Rovelli, Karl Friston … you can see the full schedule here. Apparently video of the talks is not available yet but will be soon.

Let me close this post by sharing two important new insights about quantum mechanics that emerged from my conversations at the FQXi meeting:

(1) In Hilbert space, no one can hear you scream. Unless, that is, you scream the exact same way everywhere, or unless you split into separate copies, one for each different way of screaming.

(2) It’s true that, as a matter of logic, the Schrödinger equation does not imply the Born Rule. Having said that, if the Schrödinger equation were leading a rally, and the crowd started a chant of “BORN RULE! BORN RULE! BORN RULE!”—the Schrödinger equation would just smile and wait 13 seconds for the chant to die down before continuing.

John Wright joins UT Austin

Wednesday, July 3rd, 2019

I’m delighted to announce that quantum computing theorist John Wright will be joining the computer science faculty at UT Austin in Fall 2020, after he finishes a one-year postdoc at Caltech.

John made an appearance on this blog a few months ago, when I wrote about the new breakthrough by him and Anand Natarajan: namely, that MIP* (multi-prover interactive proofs with entangled provers) contains NEEXP (nondeterministic double-exponential time). Previously, MIP* had only been known to contain NEXP (nondeterministic single exponential time). So, this is an exponential expansion in the power of entangled provers over what was previously known and believed, and the first proof that entanglement actually increases the power of multi-prover protocols, rather than decreasing it (as it could’ve done a priori). Even more strikingly, there seems to be no natural stopping point: MIP* might soon swallow up arbitrary towers of exponentials or even the halting problem (!). For more, see for example this Quanta article, or this post by Thomas Vidick, or this short story [sic] by Henry Yuen.

John grew up in Texas, so he’s no stranger to BBQ brisket or scorching weather. He did his undergrad in computer science at UT Austin—my colleagues remember him as a star—and then completed his PhD with Ryan O’Donnell at Carnegie Mellon, followed by a postdoc at MIT. Besides the work on MIP*, John is also well-known for his 2015 work with O’Donnell pinning down the sample complexity of quantum state tomography. Their important result, a version of which was independently obtained by Haah et al., says that if you want to learn an unknown d-dimensional quantum mixed state ρ to a reasonable precision, then ~d^2 copies of ρ are both necessary and sufficient. This solved a problem that had personally interested me, and already plays a role in, e.g., my work on shadow tomography and gentle measurements.

Our little quantum information center at UT Austin is growing rapidly. Shyam Shankar, a superconducting qubits guy who previously worked in Michel Devoret’s group at Yale, will also be joining UT’s Electrical and Computer Engineering department this fall. I’ll have two new postdocs—Andrea Rocchetto and Yosi Atia—as well as new PhD students. We’ll continue recruiting this coming year, with potential opportunities for students, postdocs, faculty, and research scientists across the CS, physics, and ECE departments as well as the Texas Advanced Computing Center (TACC). I hope you’ll consider applying to join us.

With no evaluative judgment attached, I can honestly say that this is an unprecedented time for quantum computing as a field. Where once faculty applicants struggled to make a case for quantum computing (physics departments: “but isn’t this really CS?” / CS departments: “isn’t it really physics?” / everyone: “couldn’t this whole QC thing, like, all blow over in a year?”), today departments are vying with each other and with industry players and startups to recruit talented people. In such an environment, we’re fortunate to be doing as well as we are. We hope to continue to expand.

Meanwhile, this was an unprecedented year for CS hiring at UT Austin more generally. John Wright is one of at least four new faculty (probably more) who will be joining us. It’s a good time to be in CS.

A huge welcome to John, and hook ’em Hadamards!

(And for US readers: have a great 4th! Though how could any fireworks match the proof of the Sensitivity Conjecture?)

On the scientific accuracy of “Avengers: Endgame”

Friday, May 3rd, 2019

[BY REQUEST: SPOILERS FOLLOW]

Today Ben Lindbergh, a writer for The Ringer, put out an article about the scientific plausibility (!) of the time-travel sequences in the new “Avengers” movie. The article relied on two interviewees:

(1) David Deutsch, who confirmed that he has no idea what the “Deutsch proposition” mentioned by Tony Stark refers to but declined to comment further, and

(2) some quantum computing dude from UT Austin who had no similar scruples about spouting off on the movie.

To be clear, the UT Austin dude hadn’t even seen the movie, or any of the previous “Avengers” movies for that matter! He just watched the clips dealing with time travel. Yet Lindbergh still saw fit to introduce him as “a real-life [Tony] Stark without the vast fortune and fancy suit.” Hey, I’ll take it.

Anyway, if you’ve seen the movie, and/or you know Deutsch’s causal consistency proposal for quantum closed timelike curves, and you can do better than I did at trying to reconcile the two, feel free to take a stab in the comments.

Wonderful world

Thursday, April 11th, 2019

I was saddened about the results of the Israeli election. The Beresheet lander, which lost its engine and crashed onto the moon as I was writing this, seems like a fitting symbol for where the country is now headed. Whatever he was at the start of his career, Netanyahu has become the Israeli Trump—a breathtakingly corrupt strongman who appeals to voters’ basest impulses, sacrifices his country’s future and standing in the world for short-term electoral gain, considers himself unbound by laws, recklessly incites hatred of minorities, and (despite zero personal piety) cynically panders to religious fundamentalists who help keep him in power. Just like with Trump, it’s probably futile to hope that lawyers will swoop in and free the nation from the demagogue’s grip: legal systems simply aren’t designed for assaults of this magnitude.

(If, for example, you were designing the US Constitution, how would you guard against a presidential candidate who openly supported and was supported by a hostile foreign power, and won anyway? Would it even occur to you to include such possibilities in your definitions of concepts like “treason” or “collusion”?)

The original Zionist project—the secular, democratic vision of Herzl and Ben-Gurion and Weizmann and Einstein, which the Nazis turned from a dream to a necessity—matters more to me than most things in this world, and that was true even before I’d spent time in Israel and before I had a wife and kids who are Israeli citizens. It would be depressing if, after a century of wildly improbable victories against external threats, Herzl’s project were finally to eat itself from the inside. Of course I have analogous worries (scaled up by two orders of magnitude) for the US—not to mention the UK, India, Brazil, Poland, Hungary, the Philippines … the world is now engaged in a massive test of whether Enlightenment liberalism can long endure, or whether it’s just a metastable state between one Dark Age and the next. (And to think that people have accused me of uncritical agreement with Steven Pinker, the world’s foremost optimist!)

In times like this, one takes one’s happiness where one can find it.

So, yeah: I’m happy that there’s now an “image of a black hole” (or rather, of radio waves being bent around a black hole’s silhouette). If you really want to understand what the now-ubiquitous image is showing, I strongly recommend this guide by Matt Strassler.

I’m happy that integer multiplication can apparently now be done in O(n log n), capping a decades-long quest (see also here).

I’m happy that there’s now apparently a spectacular fossil record of the first minutes after the asteroid impact that ended the Cretaceous period. Even better will be if this finally proves that, yes, some non-avian dinosaurs were still alive on impact day, and had not gone extinct due to unrelated climate changes slightly earlier. (Last week, my 6-year-old daughter sang a song in a school play about how “no one knows what killed the dinosaurs”—the verses ran through the asteroid and several other possibilities. I wonder if they’ll retire that song next year.) If you haven’t yet read it, the New Yorker piece on this is a must.

I’m happy that my good friend Zach Weinersmith (legendary author of SMBC Comics), as well as the GMU economist Bryan Caplan (rabblerousing author of The Case Against Education, which I reviewed here), have a new book out: a graphic novel that makes a moral and practical case for open borders (!). Their book is now available for pre-order, and at least at some point cracked Amazon’s top 10. Just last week I met Bryan for the first time, when he visited UT Austin to give a talk based on the book. He said that meeting me (having known me only from the blog) was like meeting a fictional character; I said the feeling was mutual. And as for Bryan’s talk? It felt great to spend an hour visiting a fantasyland where the world’s economies are run by humane rationalist technocrats, and where walls are going down rather than up.

I’m happy that, according to this Vanity Fair article, Facebook will still ban you for writing that “men are scum” or that “women are scum”—having ultimately rejected the demands of social-justice activists that it ban only the latter sentence, not the former. According to the article, everyone on Facebook’s Community Standards committee agreed with the activists that this was the right result: dehumanizing comments about women have no place on the platform, while (superficially) dehumanizing comments about men are an important part of feminist consciousness-raising that require protection. The problem was simply that the committee couldn’t come up with any general principle that would yield that desired result, without also yielding bad results in other cases.

I’m happy that the 737 Max jets are grounded and will hopefully be fixed, with no thanks to the FAA. Strangely, while this was still the top news story, I gave a short talk about quantum computing to a group of Boeing executives who were visiting UT Austin on a previously scheduled trip. The title of the session they put me in? “Disruptive Computation.”

I’m happy that Greta Thunberg, the 16-year-old Swedish climate activist, has attracted a worldwide following and might win the Nobel Peace Prize. I hope she does—and more importantly, I hope her personal story will help galvanize the world into accepting things that it already knows are true, as with the example of Anne Frank (or for that matter, Gandhi or MLK). Confession: back when I was an adolescent, I often daydreamed about doing exactly what Thunberg is doing right now, leading a worldwide children’s climate movement. I didn’t, of course. In my defense, I wouldn’t have had the charisma for it anyway—and also, I got sidetracked by even more pressing problems, like working out the quantum query complexity of recursive Fourier sampling. But fate seems to have made an excellent choice in Thunberg. She’s not the prop of any adult—just a nerdy girl with Asperger’s who formed the idea to sit in front of Parliament every day to protest the destruction of the world, because she couldn’t understand why everyone else wasn’t.

I’m happy that the college admissions scandal has finally forced Americans to confront the farcical injustice of our current system—a system where elite colleges pretend to peer into applicants’ souls (or the souls of the essay consultants hired by the applicants’ parents?), and where they preen about the moral virtue of their “holistic, multidimensional” admissions criteria, which amount in practice to shutting out brilliant working-class Asian kids in favor of legacies and rich badminton players. Not to horn-toot, but Steven Pinker and I tried to raise the alarm about this travesty five years ago (see for example this post), and were both severely criticized for it. I do worry, though, that people will draw precisely the wrong lessons from the scandal. If, for example, a few rich parents resorted to outright cheating on the SAT—all their other forms of gaming and fraud apparently being insufficient—then the SAT itself must be to blame so we should get rid of it. In reality, the SAT (whatever its flaws) is almost the only bulwark we have against the complete collapse of college admissions offices into nightclub bouncers. This Quillette article says it much better than I can.

I’m happy that there will be a symposium from May 6-9 at the University of Toronto, to honor Stephen Cook and the (approximate) 50th anniversary of the discovery of NP-completeness. I’m happy that I’ll be attending and speaking there. If you’re interested, there’s still time to register!

Finally, I’m happy about the following “Sierpinskitaschen” baked by CS grad student and friend-of-the-blog Jess Sorrell, and included with her permission (Jess says she got the idea from Debs Gardner).

[Photo of the Sierpinskitaschen]

Congratulations!

Friday, April 5th, 2019

Congrats to Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, who won the 2018 Turing Award for their work on deep learning (i.e., what used to be called neural nets). This might be the first Turing Award ever given for something where no one really understands why it works … and it’s years overdue.

Congrats to Avi Wigderson for winning the Knuth Prize. When I was asked to write a supporting nomination letter, my first suggestion was to submit a blank sheet of paper—since for anyone in theoretical computer science, there’s nothing that needs to be said about why Avi should win any awards we have. I hope Avi remains a guiding light of our community for many years to come.

And congrats to Mark Braverman for winning the Alan T. Waterman Award (a prize I have some personal fondness for), which he shares with materials scientist Jennifer Dionne. As Sasha Razborov once put it, after he (Sasha), I, and others recoiled from the task of proving the Linial-Nisan Conjecture, that polylog-wise independent distributions are indistinguishable from uniform by AC^0 circuits, a “braver man” stepped in to do the job.

Beware of fake FOCS site!

Wednesday, April 3rd, 2019

As most of you in theoretical computer science will know, the submission deadline for the 2019 FOCS conference is this Friday, April 5. The FOCS’2019 program committee chair, my UT Austin colleague David Zuckerman, has asked me to warn everyone that a fake submission site was set up at aconf.org—apparently as a phishing scam—and is one of the first results to come up when you google “FOCS 2019.” Do not submit there! The true URL is focs2019.cs.jhu.edu; accept no substitutes!

Anyway, I’ve been thrashing for several weeks—just barely escaping spaghettification at the Email Event Horizon—but I hope to be back shortly with your regularly scheduled programming.

Four updates

Tuesday, February 12th, 2019

A few weeks ago, I was at QIP’2019 in Boulder, CO. This week I was at SQuInT’2019 in Albuquerque, NM. There were lots of amazing talks—feel free to ask about them in the comments section.

There’s an interview with me at the website “GigaOm,” conducted by Byron Reese and entitled Quantum Computing: Capabilities and Limits. I didn’t proofread the transcript and it has some errors in it, but hopefully the meaning comes through. In other interview news, if you were interested in my podcast with Adam Ford in Melbourne but don’t like YouTube, Adam has helpfully prepared transcripts of the two longest segments: The Ghost in the Quantum Turing Machine and The Winding Road to Quantum Supremacy.

The New York Times ran an article entitled The Hard Part of Computer Science? Getting Into Class, about the surge in computer science majors all over the US, and the shortage of professors to teach them. The article’s go-to example of a university where this is happening is UT Austin, and there’s extensive commentary from my department chair, Don Fussell.

The STOC’2019 accepted papers list is finally out. Lots of cool stuff!