## Archive for the ‘Bell’s Theorem? But a Flesh Wound!’ Category

### My New York Times op-ed on quantum supremacy

Wednesday, October 30th, 2019

I’d like to offer special thanks to the editor in charge, Eleanor Barkhorn, who commissioned this piece and then went way, way beyond the call of duty to get it right—including relaxing the usual length limit to let me squeeze in amplitudes and interference, and working late into the night to fix last-minute problems. Obviously I take sole responsibility for whatever errors remain.

Of course a lot of material still ended up on the cutting room floor, including a little riff about Andrew Yang’s tweet that because of quantum supremacy, now “no code is uncrackable,” as well as Ivanka Trump’s tweet giving credit for Google’s experiment (one that Google was working toward since 2015) partly to her father’s administration.

While I’m posting: those of a more technical bent might want to check out my new short preprint with UT undergraduate Sam Gunn, where we directly study the complexity-theoretic hardness of spoofing Google’s linear cross-entropy benchmark using a classical computer. Enjoy!

### Quantum supremacy: the gloves are off

Wednesday, October 23rd, 2019

New York Times article
IBM paper and blog post responding to Google’s announcement
Boaz Barak’s new post: “Boaz’s inferior classical inferiority FAQ”
Lipton and Regan’s post
My quantum supremacy interview with the BBC (featuring some of my fewest “uhms” and “ahs” ever!)
NEW: My preprint with Sam Gunn, On the Classical Hardness of Spoofing Linear Cross-Entropy Benchmarking
My interview on NPR affiliate WOSU (starts around 16:30)

When Google’s quantum supremacy paper leaked a month ago—not through Google’s error, but through NASA’s—I had a hard time figuring out how to cover the news here. I had to say something; on the other hand, I wanted to avoid any detailed technical analysis of the leaked paper, because I was acutely aware that my colleagues at Google were still barred by Nature‘s embargo rules from publicly responding to anything I or others said. (I was also one of the reviewers for the Nature paper, which put additional obligations on me.)

I ended up with Scott’s Supreme Quantum Supremacy FAQ, which tried to toe this impossible line by “answering general questions about quantum supremacy, and the consequences of its still-hypothetical achievement, in light of the leak.” It wasn’t an ideal solution—for one thing, because while I still regard Google’s sampling experiment as a historic milestone for our whole field, there are some technical issues, aspects that subsequent experiments (hopefully coming soon) will need to improve. Alas, the ground rules of my FAQ forced me to avoid such issues, which caused some readers to conclude mistakenly that I didn’t think there were any.

Now, though, the Google paper has come out as Nature‘s cover story, at the same time as there have been new technical developments—most obviously, the paper from IBM (see also their blog post) saying that they could simulate the Google experiment in 2.5 days, rather than the 10,000 years that Google had estimated.

(Yesterday I was deluged by emails asking me “whether I’d seen” IBM’s paper. As a science blogger, I try to respond to stuff pretty quickly when necessary, but I don’t—can’t—respond in Twitter time.)

So now the gloves are off. No more embargo. Time to address the technical stuff under the hood—which is the purpose of this post.

I’m going to assume, from this point on, that you already understand the basics of sampling-based quantum supremacy experiments, and that I don’t need to correct beginner-level misconceptions about what the term “quantum supremacy” does and doesn’t mean (no, it doesn’t mean scalability, fault-tolerance, useful applications, breaking public-key crypto, etc. etc.). If this is not the case, you could start (e.g.) with my FAQ, or with John Preskill’s excellent Quanta commentary.

(1) So what about that IBM thing? Are random quantum circuits easy to simulate classically?

OK, so let’s carefully spell out what the IBM paper says. They argue that, by commandeering the full attention of Summit at Oak Ridge National Lab, the most powerful supercomputer that currently exists on Earth—one that fills the area of two basketball courts, and that (crucially) has 250 petabytes of hard disk space—one could just barely store the entire quantum state vector of Google’s 53-qubit Sycamore chip in hard disk.  And once one had done that, one could simulate the chip in ~2.5 days, more-or-less just by updating the entire state vector by brute force, rather than the 10,000 years that Google had estimated on the basis of my and Lijie Chen’s “Schrödinger-Feynman algorithm” (which can get by with less memory).

The IBM group understandably hasn’t actually done this yet—even though IBM set it up, the world’s #1 supercomputer isn’t just sitting around waiting for jobs! But I see little reason to doubt that their analysis is basically right. I don’t know why the Google team didn’t consider how such near-astronomical hard disk space would change their calculations; probably they wish they had.

I find this to be much, much better than IBM’s initial reaction to the Google leak, which was simply to dismiss the importance of quantum supremacy as a milestone. Designing better classical simulations is precisely how IBM and others should respond to Google’s announcement, and how I said a month ago that I hoped they would respond. If we set aside the pass-the-popcorn PR war (or even if we don’t), this is how science progresses.

But does IBM’s analysis mean that “quantum supremacy” hasn’t been achieved? No, it doesn’t—at least, not under any definition of “quantum supremacy” that I’ve ever used. The Sycamore chip took about 3 minutes to generate the ~5 million samples that were needed to pass the “linear cross-entropy benchmark”—the statistical test that Google applies to the outputs of its device.

(Technical note added: Google’s samples are extremely noisy—the actual distribution being sampled from is something like 0.998U+0.002D, where U is the uniform distribution and D is the hard distribution that you want. What this means, in practice, is that you need to take a number of samples that’s large compared to 1/0.002^2, in order to extract a signal corresponding to D. But the good news is that Google can take that many samples in just a few minutes, since once the circuit has been loaded onto the chip, generating each sample takes only about 40 microseconds. And once you’ve done this, what hardness results we have for passing the linear cross-entropy test—to be discussed later in this post—apply basically just as well as if you’d taken a single noiseless sample.)
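Here's a back-of-envelope sketch of where the sample counts above come from. The scaling assumption is mine (a rough one: the per-sample signal is ~f and the noise of a k-sample average shrinks like 1/√k, so s-sigma detection needs on the order of (s/f)^2 samples); only the 0.2% fidelity and 40-microsecond-per-sample figures come from the text.

```python
# Rough estimate of samples needed to detect the XEB signal at fidelity f.
# Assumption (mine): s-sigma detection needs on the order of (s/f)^2 samples.

def samples_needed(fidelity, sigma=5.0):
    """Order-of-magnitude sample count for an s-sigma XEB signal."""
    return int((sigma / fidelity) ** 2)

k = samples_needed(0.002)            # Google's ~0.2% circuit fidelity
print(f"{k:,} samples")              # a few million, matching the ~5M quoted
print(f"{k * 40e-6 / 60:.1f} min")   # ~4.2 min at 40 microseconds per sample
```

With these (assumed) numbers, the few-million-sample, few-minute regime quoted in the post falls right out.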

Anyway, you might notice that three minutes versus 2.5 days is still a quantum speedup by a factor of 1200. But even more relevant, I think, is to compare the number of “elementary operations.” Let’s generously count a FLOP (floating-point operation) as the equivalent of a quantum gate. Then by my estimate, we’re comparing ~5×10^9 quantum gates against ~2×10^20 FLOPs—a quantum speedup by a factor of ~40 billion.
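For the skeptical reader, the two comparisons above computed out explicitly (all inputs are the estimates quoted in the text):

```python
# Wall-clock speedup: ~3 minutes of Sycamore sampling vs IBM's 2.5-day estimate.
quantum_seconds = 3 * 60
classical_seconds = 2.5 * 24 * 3600
print(round(classical_seconds / quantum_seconds))   # 1200

# Elementary-operation speedup: ~5 x 10^9 quantum gates vs ~2 x 10^20 FLOPs.
quantum_gates = 5e9
classical_flops = 2e20
print(f"{classical_flops / quantum_gates:.0e}")     # 4e+10, i.e. ~40 billion
```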

For me, though, the broader point is that neither party here—certainly not IBM—denies that the top-supercomputers-on-the-planet-level difficulty of classically simulating Google’s 53-qubit programmable chip really is coming from the exponential character of the quantum states in that chip, and nothing else. That’s what makes this back-and-forth fundamentally different from the previous one between D-Wave and the people who sought to simulate its devices classically. The skeptics, like me, didn’t much care what speedup over classical benchmarks there was or wasn’t today: we cared about the increase in the speedup as D-Wave upgraded its hardware, and the trouble was that we never saw a convincing case that there would be one. I’m a theoretical computer scientist, and this is what I believe: that after the constant factors have come and gone, what remains are asymptotic growth rates.

In the present case, while increasing the circuit depth won’t evade IBM’s “store everything to hard disk” strategy, increasing the number of qubits will. If Google, or someone else, upgraded from 53 to 55 qubits, that would apparently already be enough to exceed Summit’s 250-petabyte storage capacity. At 60 qubits, you’d need 33 Summits. At 70 qubits, enough Summits to fill a city … you get the idea.
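The storage arithmetic behind those figures is simple enough to spell out. A minimal sketch, assuming 8 bytes per amplitude (single-precision complex, as in IBM's proposal) and binary petabytes:

```python
# Memory to store a full n-qubit state vector: 2^n amplitudes, 8 bytes each,
# measured in binary petabytes against Summit's 250 PB of disk.
SUMMIT_PB = 250

def statevector_pb(n):
    return 2 ** n * 8 / 2 ** 50

for n in (53, 55, 60, 70):
    pb = statevector_pb(n)
    print(f"{n} qubits: {pb:,.0f} PB  (~{pb / SUMMIT_PB:,.1f} Summits)")
# 53 qubits fit (64 PB); 55 already exceed 250 PB; 60 need ~33 Summits.
```

Under these assumptions the numbers in the text come out exactly: 53 qubits squeeze into Summit, 55 already overflow it, and 60 would need about 33 of them.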

From the beginning, it was clear that quantum supremacy would not be a milestone like the moon landing—something that’s achieved in a moment, and is then clear to everyone for all time. It would be more like eradicating measles: it could be achieved, then temporarily unachieved, then re-achieved. For by definition, quantum supremacy is all about beating something—namely, classical computation—and the latter can, at least for a while, fight back.

As Boaz Barak put it to me, the current contest between IBM and Google is analogous to Kasparov versus Deep Blue, except with the world-historic irony that IBM is playing the role of Kasparov! In other words, Kasparov can put up a heroic struggle, during a “transitional period” that lasts a year or two, but the fundamentals of the situation are that he’s toast. If Kasparov had narrowly beaten Deep Blue in 1997, rather than narrowly losing, the whole public narrative would likely have been different (“humanity triumphs over computers after all!”). Yet as Kasparov himself well knew, the very fact that the contest was close meant that, either way, human dominance would soon end for good.

Let me leave the last word on this to friend-of-the-blog Greg Kuperberg, who graciously gave me permission to quote his comments about the IBM paper.

I’m not entirely sure how embarrassed Google should feel that they overlooked this. I’m sure that they would have been happier to anticipate it, and happier still if they had put more qubits on their chip to defeat it. However, it doesn’t change their real achievement.

I respect the IBM paper, even if the press along with it seems more grouchy than necessary. I tend to believe them that the Google team did not explore all avenues when they said that their 53 qubits aren’t classically simulable. But if this is the best rebuttal, then you should still consider how much Google and IBM still agree on this as a proof-of-concept of QC. This is still quantum David vs classical Goliath, in the extreme. 53 qubits is in some ways still just 53 bits, only enhanced with quantum randomness. To answer those 53 qubits, IBM would still need entire days of computer time with the world’s fastest supercomputer, a 200-petaflop machine with hundreds of thousands of processing cores and trillions of high-speed transistors. If we can confirm that the Google chip actually meets spec, but we need this much computer power to do it, then to me that’s about as convincing as a larger quantum supremacy demonstration that humanity can no longer confirm at all.

Honestly, I’m happy to give both Google and IBM credit for helping the field of QC, even if it is the result of a strange dispute.

I should mention that, even before IBM’s announcement, Johnnie Gray, a postdoc at Imperial College, gave a talk (abstract here) at Caltech’s Institute for Quantum Information with a proposal for a different faster way to classically simulate quantum circuits like Google’s—in this case, by doing tensor network contraction more cleverly. Unlike both IBM’s proposed brute-force simulation, and the Schrödinger-Feynman algorithm that Google implemented, Gray’s algorithm (as far as we know now) would need to be repeated k times if you wanted k independent samples from the hard distribution. Partly because of this issue, Gray’s approach doesn’t currently look competitive for simulating thousands or millions of samples, but we’ll need to watch it and see what happens.

(2) Direct versus indirect verification.

The discussion of IBM’s proposed simulation brings us to a curious aspect of the Google paper—one that was already apparent when Nature sent me the paper for review back in August. Namely, Google took its supremacy experiments well past the point where even they themselves knew how to verify the results, by any classical computation that they knew how to perform feasibly (say, in less than 10,000 years).

So you might reasonably ask: if they couldn’t even verify the results, then how did they get to claim quantum speedups from those experiments? Well, they resorted to various gambits, which basically involved estimating the fidelity on quantum circuits that looked almost the same as the hard circuits, but happened to be easier to simulate classically, and then making the (totally plausible) assumption that that fidelity would be maintained on the hard circuits. Interestingly, they also cached their outputs and put them online (as part of the supplementary material to their Nature paper), in case it became feasible to verify them in the future.

Maybe you can now see where this is going. From Google’s perspective, IBM’s rainstorm comes with a big silver lining. Namely, by using Summit, hopefully it will now be possible to verify Google’s hardest (53-qubit and depth-20) sampling computations directly! This should provide an excellent test, since not even the Google group themselves would’ve known how to cheat and bias the results had they wanted to.

This whole episode has demonstrated the importance, when doing a sampling-based quantum supremacy experiment, of going deep into the regime where you can no longer classically verify the outputs, as weird as that sounds. Namely, you need to leave yourself a margin, in the likely event that the classical algorithms improve!

Having said that, I don’t mind revealing at this point that the lack of direct verification of the outputs, for the largest reported speedups, was my single biggest complaint when I reviewed Google’s Nature submission. It was because of my review that they added a paragraph explicitly pointing out that they did do direct verification for a smaller quantum speedup:

The largest circuits for which the fidelity can still be directly verified have 53 qubits and a simplified gate arrangement. Performing random circuit sampling on these at 0.8% fidelity takes one million cores 130 seconds, corresponding to a million-fold speedup of the quantum processor relative to a single core.

(An earlier version of this post misstated the numbers involved.)

(3) The asymptotic hardness of spoofing Google’s benchmark.

OK, but if Google thought that spoofing its test would take 10,000 years, using the best known classical algorithms running on the world’s top supercomputers, and it turns out instead that it could probably be done in more like 2.5 days, then how much else could’ve been missed? Will we find out next that Google’s benchmark can be classically spoofed in mere milliseconds?

Well, no one can rule that out, but we do have some reasons to think that it’s unlikely—and crucially, that even if it turned out to be true, one would just have to add 10 or 20 or 30 more qubits to make it no longer true. (We can’t be more definitive than that? Aye, such are the perils of life at a technological inflection point—and of computational complexity itself.)

The key point to understand here is that we really are talking about simulating a random quantum circuit, with no particular structure whatsoever. While such problems might have a theoretically efficient classical algorithm—i.e., one that runs in time polynomial in the number of qubits—I’d personally be much less surprised if you told me there was a polynomial-time classical algorithm for factoring. In the universe where amplitudes of random quantum circuits turn out to be efficiently computable—well, you might as well just tell me that P=PSPACE and be done with it.

Crucially, if you look at IBM’s approach to simulating quantum circuits classically, and Johnnie Gray’s approach, and Google’s approach, they could all be described as different flavors of “brute force.” That is, they all use extremely clever tricks to parallelize, shave off constant factors, make the best use of available memory, etc., but none involves any deep new mathematical insight that could roust BPP and BQP and the other complexity gods from their heavenly slumber. More concretely, none of these approaches seem to have any hope of “breaching the 2^n barrier,” where n is the number of qubits in the quantum circuit to be simulated (assuming that the circuit depth is reasonably large). Mostly, they’re just trying to get down to that barrier, while taking the maximum advantage of whatever storage and connectivity and parallelism are there.

Ah, but at the end of the day, we only believe that Google’s Sycamore chip is solving a classically hard problem because of the statistical test that Google applies to its outputs: the so-called “Linear Cross-Entropy Benchmark,” which I described in Q3 of my FAQ. And even if we grant that calculating the output probabilities for a random quantum circuit is almost certainly classically hard, and sampling the output distribution of a random quantum circuit is almost certainly classically hard—still, couldn’t spoofing Google’s benchmark be classically easy?

This last question is where complexity theory can contribute something to the story. A couple weeks ago, UT undergraduate Sam Gunn and I adapted the hardness analysis from my and Lijie Chen’s 2017 paper “Complexity-Theoretic Foundations of Quantum Supremacy Experiments,” to talk directly about the classical hardness of spoofing the Linear Cross-Entropy benchmark. Our short paper about this should be on the arXiv later this week (or early next week, given that there are no arXiv updates on Friday or Saturday nights). Update: here it is.

Briefly, Sam and I show that if you had a sub-2^n classical algorithm to spoof the Linear Cross-Entropy benchmark, then you’d also have a sub-2^n classical algorithm that, given as input a random quantum circuit, could estimate a specific output probability (for example, that of the all-0 string) with variance at least slightly (say, Ω(2^(-3n))) better than that of the trivial estimator that just always guesses 2^(-n). Or in other words: we show that spoofing Google’s benchmark is no easier than the general problem of nontrivially estimating amplitudes in random quantum circuits. Furthermore, this result automatically generalizes to the case of noisy circuits: all that the noise affects is the threshold for the Linear Cross-Entropy benchmark, and thus (indirectly) the number of samples one needs to take with the QC. Our result helps to explain why, indeed, neither IBM nor Johnnie Gray nor anyone else suggested any attack that’s specific to Google’s Linear Cross-Entropy benchmark: they all simply attack the general problem of calculating the final amplitudes.

(4) Why use Linear Cross-Entropy at all?

In the comments of my FAQ, some people wondered why Google chose the Linear Cross-Entropy benchmark specifically—especially since they’d used a different benchmark (multiplicative cross-entropy, which unlike the linear version actually is a cross-entropy) in their earlier papers. I asked John Martinis this question, and his answer was simply that linear cross-entropy had the lowest variance of any estimator they tried. Since I also like linear cross-entropy—it turns out, for example, to be convenient for the analysis of my certified randomness protocol—I’m 100% happy with their choice. Having said that, there are many other choices of benchmark that would’ve also worked fine, and with roughly the same level of theoretical justification.

(5) Controlled-Z versus iSWAP gates.

Another interesting detail from the Google paper is that, in their previous hardware, they could implement a particular 2-qubit gate called the Controlled-Z. For their quantum supremacy demonstration, on the other hand, they modified their hardware to implement a different 2-qubit gate: some weird combination of iSWAP and Controlled-Z (see the comments section for more). Now, this other gate has no known advantages over the Controlled-Z, for any applications like quantum simulation or Shor’s algorithm or Grover search. Why then did Google make the switch? Simply because, with certain classical simulation methods that they’d been considering, the simulation’s running time grows like 4 to the power of the number of these other gates, but only like 2 to the power of the number of Controlled-Z gates! In other words, they made this engineering choice purely and entirely to make a classical simulation of their device sweat more. This seems totally fine and entirely within the rules to me. (Alas, this choice has no effect on a proposed simulation method like IBM’s.)

(6) Gil Kalai’s objections.

Over the past month, Shtetl-Optimized regular and noted quantum computing skeptic Gil Kalai has been posting one objection to the Google experiment after another on his blog. Unlike the IBM group and many of Google’s other critics, Gil completely accepts the centrality of quantum supremacy as a goal. Indeed, he’s firmly predicted for years that quantum supremacy could never be achieved for fundamental reasons—and he agrees that the Google result, if upheld, would refute his worldview. Gil also has no dispute with the exponential classical hardness of the problem that Google is solving.

Instead, Gil—if we’re talking not about “steelmanning” his beliefs, but about what he himself actually said—has taken the position that the Google experiment must’ve been done wrong and will need to be retracted. He’s offered varying grounds for this. First he said that Google never computed the full histogram of probabilities with a smaller number of qubits (for which such an experiment is feasible), which would be an important sanity check. Except, it turns out they did do that, and it’s in their 2018 Science paper. Next he said that the experiment is invalid because the qubits have to be calibrated in a way that depends on the specific circuit to be applied. Except, this too turns out to be false: John Martinis explicitly confirmed for me that once the qubits are calibrated, you can run any circuit on them that you want. In summary, unlike the objections of the IBM group, so far I’ve found Gil’s objections to be devoid of scientific interest or merit.

Update #1: Alas, I’ll have limited availability today for answering comments, since we’ll be grading the midterm exam for my Intro to Quantum Information Science course! I’ll try to handle the backlog tomorrow (Thursday).

Update #2: Aaannd … timed to coincide with the Google paper, last night the group of Jianwei Pan and Chaoyang Lu put up a preprint on the arXiv reporting a BosonSampling experiment with 14 photons observed out of 20 generated (the previous record had been 6 photons). At this stage of the quantum supremacy race, many had of course written off BosonSampling—or said that its importance was mostly historical, in that it inspired Google’s random circuit sampling effort. I’m thrilled to see BosonSampling itself take such a leap; hopefully, this will eventually lead to a demonstration that BosonSampling was (is) a viable pathway to quantum supremacy as well. And right now, with fault-tolerance still having been demonstrated in zero platforms, we need all the viable pathways we can get. What an exciting day for the field.

### Scott’s Supreme Quantum Supremacy FAQ!

Monday, September 23rd, 2019

You’ve seen the stories—in the Financial Times, Technology Review, CNET, Facebook, Reddit, Twitter, or elsewhere—saying that a group at Google has now achieved quantum computational supremacy with a 53-qubit superconducting device. While these stories are easy to find, I’m not going to link to them here, for the simple reason that none of them were supposed to exist yet.

As the world now knows, Google is indeed preparing a big announcement about quantum supremacy, to coincide with the publication of its research paper in a high-profile journal (which journal? you can probably narrow it down to two). This will hopefully happen within a month.

Meanwhile, though, NASA, which has some contributors to the work, inadvertently posted an outdated version of the Google paper on a public website. It was there only briefly, but long enough to make it to the Financial Times, my inbox, and millions of other places. Fact-free pontificating about what it means has predictably proliferated.

The world, it seems, is going to be denied its clean “moon landing” moment, wherein the Extended Church-Turing Thesis gets experimentally obliterated within the space of a press conference. This is going to be more like the Wright Brothers’ flight—about which rumors and half-truths leaked out in dribs and drabs between 1903 and 1908, the year Will and Orville finally agreed to do public demonstration flights. (This time around, though, it thankfully won’t take that long to clear everything up!)

I’ve known about what was in the works for a couple months now; it was excruciating not being able to blog about it. Though sworn to secrecy, I couldn’t resist dropping some hints here and there (did you catch any?)—for example, in my recent Bernays Lectures in Zürich, a lecture series whose entire structure built up to the brink of this moment.

This post is not an official announcement or confirmation of anything. Though the lightning may already be visible, the thunder belongs to the group at Google, at a time and place of its choosing.

Rather, because so much misinformation is swirling around, what I thought I’d do here, in my role as blogger and “public intellectual,” is offer Scott’s Supreme Quantum Supremacy FAQ. You know, just in case you were randomly curious about the topic of quantum supremacy, or wanted to know what the implications would be if some search engine company based in Mountain View or wherever were hypothetically to claim to have achieved quantum supremacy.

Q1. What is quantum computational supremacy?

Often abbreviated to just “quantum supremacy,” the term refers to the use of a quantum computer to solve some well-defined set of problems that would take orders of magnitude longer to solve with any currently known algorithms running on existing classical computers—and not for incidental reasons, but for reasons of asymptotic quantum complexity. The emphasis here is on being as sure as possible that the problem really was solved quantumly and really is classically intractable, and ideally achieving the speedup soon (with the noisy, non-universal QCs of the present or very near future). If the problem is also useful for something, then so much the better, but that’s not at all necessary. The Wright Flyer and the Fermi pile weren’t useful in themselves.

Q2. If Google has indeed achieved quantum supremacy, does that mean that now “no code is uncrackable”, as Democratic presidential candidate Andrew Yang recently tweeted?

No, it doesn’t. (But I still like Yang’s candidacy.)

There are two issues here. First, the devices currently being built by Google, IBM, and others have 50-100 qubits and no error-correction. Running Shor’s algorithm to break the RSA cryptosystem would require several thousand logical qubits. With known error-correction methods, that could easily translate into millions of physical qubits, and those probably of a higher quality than any that exist today. I don’t think anyone is close to that, and we have no idea how long it will take.

But the second issue is that, even in a hypothetical future with scalable, error-corrected QCs, on our current understanding they’ll only be able to crack some codes, not all of them. By an unfortunate coincidence, the public-key codes that they can crack include most of what we currently use to secure the Internet: RSA, Diffie-Hellman, elliptic curve crypto, etc. But symmetric-key crypto should only be minimally affected. And there are even candidates for public-key cryptosystems (for example, based on lattices) that no one knows how to break quantumly after 20+ years of trying, and some efforts underway now to start migrating to those systems. For more, see for example my letter to Rebecca Goldstein.

Q3. What calculation is Google planning to do, or has it already done, that’s believed to be classically hard?

So, I can tell you, but I’ll feel slightly sheepish doing so. The calculation is: a “challenger” generates a random quantum circuit C (i.e., a random sequence of 1-qubit and nearest-neighbor 2-qubit gates, of depth perhaps 20, acting on a 2D grid of n = 50 to 60 qubits). The challenger then sends C to the quantum computer, and asks it to apply C to the all-0 initial state, measure the result in the {0,1} basis, send back whatever n-bit string was observed, and repeat some thousands or millions of times. Finally, using its knowledge of C, the classical challenger applies a statistical test to check whether the outputs are consistent with the QC having done this.

So, this is not a problem like factoring with a single right answer. The circuit C gives rise to some probability distribution, call it D_C, over n-bit strings, and the problem is to output samples from that distribution. In fact, there will typically be 2^n strings in the support of D_C—so many that, if the QC is working as expected, the same output will never be observed twice. A crucial point, though, is that the distribution D_C is not uniform. Some strings enjoy constructive interference of amplitudes and therefore have larger probabilities, while others suffer destructive interference and have smaller probabilities. And even though we’ll only see a number of samples that’s tiny compared to 2^n, we can check whether the samples preferentially cluster among the strings that are predicted to be likelier, and thereby build up our confidence that something classically intractable is being done.

So, tl;dr, the quantum computer is simply asked to apply a random (but known) sequence of quantum operations—not because we intrinsically care about the result, but because we’re trying to prove that it can beat a classical computer at some well-defined task.
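To make the task concrete, here is a toy version for a handful of qubits, via brute-force state-vector simulation with NumPy. The circuit structure (random single-qubit unitaries plus nearest-neighbor Controlled-Z layers) is my own simplification, nothing like Sycamore's actual gate set; at this scale a laptop plays both the quantum computer and the challenger.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_1q(state, u, q, n):
    """Apply a 2x2 unitary u to qubit q of an n-qubit state vector."""
    t = np.moveaxis(state.reshape([2] * n), q, 0)
    t = np.tensordot(u, t, axes=([1], [0]))
    return np.moveaxis(t, 0, q).reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply Controlled-Z between qubits q1 and q2 (sign flip on |..1..1..>)."""
    t = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1] = idx[q2] = 1
    t[tuple(idx)] *= -1
    return t.reshape(-1)

def random_circuit_state(n, depth=20):
    """Run `depth` layers of random gates on |0...0>; return the final state."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    for _ in range(depth):
        for q in range(n):
            m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
            u, _ = np.linalg.qr(m)   # random 2x2 unitary via QR
            state = apply_1q(state, u, q, n)
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    return state

n = 4
probs = np.abs(random_circuit_state(n)) ** 2     # the distribution D_C
samples = rng.choice(2 ** n, size=10_000, p=probs / probs.sum())
# The samples cluster on the "peaked" strings of D_C -- exactly what the
# challenger's statistical test then checks.
```

Of course, the whole point of the real experiment is that at n = 53 this brute-force computation of `probs` is what needs a Summit-class machine.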

Q4. But if the quantum computer is just executing some random garbage circuit, whose only purpose is to be hard to simulate classically, then who cares? Isn’t this a big overhyped nothingburger?

No. As I put it the other day, it’s not an everythingburger, but it’s certainly at least a somethingburger!

It’s like, have a little respect for the immensity of what we’re talking about here, and for the terrifying engineering that’s needed to make it reality. Before quantum supremacy, by definition, the QC skeptics can all laugh to each other that, for all the billions of dollars spent over 20+ years, still no quantum computer has even once been used to solve any problem faster than your laptop could solve it, or at least not in any way that depended on its being a quantum computer. In a post-quantum-supremacy world, that’s no longer the case. A superposition involving 2^50 or 2^60 complex numbers has been computationally harnessed, using time and space resources that are minuscule compared to 2^50 or 2^60.

I keep bringing up the Wright Flyer only because the chasm between what we’re talking about, and the dismissiveness I’m seeing in some corners of the Internet, is kind of breathtaking to me. It’s like, if you believed that useful air travel was fundamentally impossible, then seeing a dinky wooden propeller plane keep itself aloft wouldn’t refute your belief … but it sure as hell shouldn’t reassure you either.

Was I right to worry, years ago, that the constant drumbeat of hype about much less significant QC milestones would wear out people’s patience, so that they’d no longer care when something newsworthy finally did happen?

Q5. Years ago, you scolded the masses for being super-excited about D-Wave, and its claims to get huge quantum speedups for optimization problems via quantum annealing. Today you scold the masses for not being super-excited about quantum supremacy. Why can’t you stay consistent?

Because my goal is not to move the “excitement level” in some uniformly preferred direction, it’s to be right! With hindsight, would you say that I was mostly right about D-Wave, even when raining on that particular parade made me unpopular in some circles? Well, I’m trying to be right about quantum supremacy too.

Q6. If quantum supremacy calculations just involve sampling from probability distributions, how do you check that they were done correctly?

Glad you asked! This is the subject of a fair amount of theory that I and others developed over the last decade. I already gave you the short version in my answer to Q3: you check by doing statistics on the samples that the QC returned, to verify that they’re preferentially clustered in the “peaks” of the chaotic probability distribution D_C. One convenient way of doing this, which Google calls the “linear cross-entropy test,” is simply to sum up Pr[C outputs s_i] over all the samples s_1,…,s_k that the QC returned, and then to declare the test a “success” if and only if the sum exceeds some threshold—say, b·k/2^n, for some constant b strictly between 1 and 2.

Admittedly, in order to apply this test, you need to calculate the probabilities Pr[C outputs s_i] on your classical computer—and the only known ways to calculate them require brute force and take ~2^n time. Is that a showstopper? No, not if n is 50, and you’re Google and are able to handle numbers like 2^50 (although not 2^1000, which exceeds a googol, har har). By running a huge cluster of classical cores for (say) a month, you can eventually verify the outputs that your QC produced in a few seconds—while also seeing that the QC was many orders of magnitude faster. However, this does mean that sampling-based quantum supremacy experiments are almost specifically designed for ~50-qubit devices like the ones being built right now. Even with 100 qubits, we wouldn’t know how to verify the results using all the classical computing power available on earth.
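For the numerically inclined, here’s a toy sketch of the linear cross-entropy test. A Haar-random state vector stands in for the output state of a random circuit (its output probabilities follow the same Porter-Thomas statistics), and n is far too small to be interesting—the point is only to see the separation between a faithful sampler and a uniform one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                      # qubits (toy scale; real experiments use ~50)
N = 2 ** n

# Stand-in for the output state of a random circuit: a random complex
# Gaussian vector, normalized, has Porter-Thomas-distributed output
# probabilities, which is the regime the linear XEB test targets.
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
p = np.abs(psi) ** 2        # ideal output distribution of "the circuit"

k = 20000                   # number of samples

def xeb_score(samples):
    # Linear cross-entropy benchmark: average of N * Pr[C outputs s_i].
    # A faithful sampler scores roughly 2; a uniform sampler roughly 1.
    return N * p[samples].mean()

quantum_samples = rng.choice(N, size=k, p=p)    # faithful "device"
uniform_samples = rng.integers(0, N, size=k)    # classical giving-up

print(round(xeb_score(quantum_samples), 2))     # ≈ 2
print(round(xeb_score(uniform_samples), 2))     # ≈ 1
```

The “brute force” cost mentioned above is hiding in the array p: computing it is cheap for n=10, but takes ~2^n time and memory as n grows.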

(Let me stress that this issue is specific to sampling experiments like the ones that are currently being done. If Shor’s algorithm factored a 2000-digit number, it would be easy to check the result by simply multiplying the claimed factors and running a primality test on them. Likewise, if a QC were used to simulate some complicated biomolecule, you could check its results by comparing them to experiment.)

Q7. Wait. If classical computers can only check the results of a quantum supremacy experiment, in a regime where the classical computers can still simulate the experiment (albeit extremely slowly), then how do you get to claim “quantum supremacy”?

Come on. With a 53-qubit chip, it’s perfectly feasible to see a speedup by a factor of many millions, in a regime where you can still directly verify the outputs, and also to see that the speedup is growing exponentially with the number of qubits, exactly as asymptotic analysis would predict. This isn’t marginal.

Q8. Is there a mathematical proof that no fast classical algorithm could possibly spoof the results of a sampling-based quantum supremacy experiment?

Not at present. But that’s not quantum supremacy researchers’ fault! As long as theoretical computer scientists can’t even prove basic conjectures like P≠NP or P≠PSPACE, there’s no hope of ruling out a fast classical simulation unconditionally. The best we can hope for are conditional hardness results. And we have indeed managed to prove some such results—see for example the BosonSampling paper, or the Bouland et al. paper on average-case #P-hardness of calculating amplitudes in random circuits, or my paper with Lijie Chen (“Complexity-Theoretic Foundations of Quantum Supremacy Experiments”). The biggest theoretical open problem in this area, in my opinion, is to prove better conditional hardness results.

Q9. Does sampling-based quantum supremacy have any applications in itself?

When people were first thinking about this subject, it seemed pretty obvious that the answer was “no”! (I know because I was one of the people.) Recently, however, the situation has changed—for example, because of my certified randomness protocol, which shows how a sampling-based quantum supremacy experiment could almost immediately be repurposed to generate bits that can be proven to be random to a skeptical third party (under computational assumptions). This, in turn, has possible applications to proof-of-stake cryptocurrencies and other cryptographic protocols. I’m hopeful that more such applications will be discovered in the near future.

Q10. If the quantum supremacy experiments are just generating random bits, isn’t that uninteresting? Isn’t it trivial to convert qubits into random bits, just by measuring them?

The key is that a quantum supremacy experiment doesn’t generate uniform random bits. Instead, it samples from some complicated, correlated probability distribution over 50- or 60-bit strings. In my certified randomness protocol, the deviations from uniformity play a central role in how the QC convinces a classical skeptic that it really was sampling the bits randomly, rather than in some secretly deterministic way (e.g., using a pseudorandom generator).

Q11. Haven’t decades of quantum-mechanical experiments–for example, the ones that violated the Bell inequality–already demonstrated quantum supremacy?

This is purely a confusion over words. Those other experiments demonstrated other forms of “quantum supremacy”: for example, in the case of Bell inequality violations, what you could call “quantum correlational supremacy.” They did not demonstrate quantum computational supremacy, meaning doing something that’s infeasible to simulate using a classical computer (where the classical simulation has no restrictions of spatial locality or anything else of that kind). Today, when people use the phrase “quantum supremacy,” it’s generally short for quantum computational supremacy.

Q12. Even so, there are countless examples of materials and chemical reactions that are hard to classically simulate, as well as special-purpose quantum simulators (like those of Lukin’s group at Harvard). Why don’t these already count as quantum computational supremacy?

Under some people’s definitions of “quantum computational supremacy,” they do! The key difference with Google’s effort is that they have a fully programmable device—one that you can program with an arbitrary sequence of nearest-neighbor 2-qubit gates, just by sending the appropriate signals from your classical computer.

In other words, it’s no longer open to the QC skeptics to sneer that, sure, there are quantum systems that are hard to simulate classically, but that’s just because nature is hard to simulate, and you don’t get to arbitrarily redefine whatever random chemical you find in the wild to be a “computer for simulating itself.” Under any sane definition, the superconducting devices that Google, IBM, and others are now building are indeed “computers.”

Q13. Did you (Scott Aaronson) invent the concept of quantum supremacy?

No. I did play some role in developing it, which led to Sabine Hossenfelder among others generously overcrediting me for the whole idea. The term “quantum supremacy” was coined by John Preskill in 2012, though in some sense the core concept goes back to the beginnings of quantum computing itself in the early 1980s. In 1993, Bernstein and Vazirani explicitly pointed out the severe apparent tension between quantum mechanics and the Extended Church-Turing Thesis of classical computer science. Then, in 1994, the use of Shor’s algorithm to factor a huge number became the quantum supremacy experiment par excellence—albeit, one that’s still (in 2019) much too hard to perform.

The key idea of instead demonstrating quantum supremacy using a sampling problem was, as far as I know, first suggested by Barbara Terhal and David DiVincenzo, in a farsighted paper from 2002. The “modern” push for sampling-based supremacy experiments started around 2011, when Alex Arkhipov and I published our paper on BosonSampling, and (independently of us) Bremner, Jozsa, and Shepherd published their paper on the commuting Hamiltonians model. These papers showed, not only that “simple,” non-universal quantum systems can solve apparently-hard sampling problems, but also that an efficient classical algorithm for the same sampling problems would imply a collapse of the polynomial hierarchy. Arkhipov and I also made a start toward arguing that even the approximate versions of quantum sampling problems can be classically hard.

As far as I know, the idea of “Random Circuit Sampling”—that is, generating your hard sampling problem by just picking a random sequence of 2-qubit gates in (say) a superconducting architecture—originated in an email thread that I started in December 2015, which also included John Martinis, Hartmut Neven, Sergio Boixo, Ashley Montanaro, Michael Bremner, Richard Jozsa, Aram Harrow, Greg Kuperberg, and others. The thread was entitled “Hard sampling problems with 40 qubits,” and my email began “Sorry for the spam.” I then discussed some advantages and disadvantages of three options for demonstrating sampling-based quantum supremacy: (1) random circuits, (2) commuting Hamiltonians, and (3) BosonSampling. After Greg Kuperberg chimed in to support option (1), a consensus quickly formed among the participants that (1) was indeed the best option from an engineering standpoint—and that, if the theoretical analysis wasn’t yet satisfactory for (1), then that was something we could remedy.

[Update: Sergio Boixo tells me that, internally, the Google group had been considering the idea of random circuit sampling since February 2015, even before my email thread. This doesn’t surprise me: while there are lots of details that had to be worked out, the idea itself is an extremely natural one.]

After that, the Google group did a huge amount of analysis of random circuit sampling, both theoretical and numerical, while Lijie Chen and I and Bouland et al. supplied different forms of complexity-theoretic evidence for the problem’s classical hardness.

Q14. If quantum supremacy was achieved, what would it mean for the QC skeptics?

I wouldn’t want to be them right now! They could retreat to the position that of course quantum supremacy is possible (who ever claimed that it wasn’t? surely not them!), and that the real issue has always been quantum error-correction. And indeed, some of them have consistently maintained that position all along. But others, including my good friend Gil Kalai, are on record, right here on this blog, predicting that even quantum supremacy can never be achieved for fundamental reasons. I won’t let them wiggle out of it now.

[Update: As many of you will have seen, Gil Kalai has taken the position that the Google result won’t stand and will need to be retracted. He asked for more data: specifically, a complete histogram of the output probabilities for a smaller number of qubits. This turns out to be already available, in a Science paper from 2018.]

Q15. What’s next?

If it’s achieved quantum supremacy, then I think the Google group already has the requisite hardware to demonstrate my protocol for generating certified random bits. And that’s indeed one of the very next things they’re planning to do.

[Addendum: Also, of course, the evidence for quantum supremacy itself can be made stronger and various loopholes closed—for example, by improving the fidelity so that fewer samples need to be taken (something that Umesh Vazirani tells me he’d like to see), by having the circuit C be generated and the outputs verified by a skeptic external to Google, and simply by letting more time pass, so outsiders can have a crack at simulating the results classically. My personal guess is that the basic picture is going to stand, but just like with the first experiments that claimed to violate the Bell inequality, there’s still plenty of room to force the skeptics into a tinier corner.]

Beyond that, one obvious next milestone would be to use a programmable QC, with (say) 50-100 qubits, to do some useful quantum simulation (say, of a condensed-matter system) much faster than any known classical method could do it. A second obvious milestone would be to demonstrate the use of quantum error-correction, to keep an encoded qubit alive for longer than the underlying physical qubits remain alive. There’s no doubt that Google, IBM, and the other players will now be racing toward both of these milestones.

[Update: Steve Girvin reminds me that the Yale group has already achieved quantum error-correction “beyond the break-even point,” albeit in a bosonic system rather than superconducting qubits. So perhaps a better way to phrase the next milestone would be: achieve quantum computational supremacy and useful quantum error-correction in the same system.]

Another update: I thought this IEEE Spectrum piece gave a really nice overview of the issues.

Last update: John Preskill’s Quanta column about quantum supremacy is predictably excellent (and possibly a bit more accessible than this FAQ).

### Here’s some video of me spouting about Deep Questions

Thursday, February 4th, 2016

In January 2014, I attended an FQXi conference on Vieques island in Puerto Rico.  While there, Robert Lawrence Kuhn interviewed me for his TV program Closer to Truth, which deals with science and religion and philosophy and you get the idea.  Alas, my interview was at the very end of the conference, and we lost track of the time—so unbeknownst to me, a plane full of theorists was literally sitting on the runway waiting for me to finish philosophizing!  This was the second time Kuhn interviewed me for his show; the first time was on a cruise ship near Norway in 2011.  (Thankless hero that I am, there’s nowhere I won’t travel for the sake of truth.)

Anyway, after a two-year wait, the videos from Puerto Rico are finally available online.  While my vignettes cover what, for most readers of this blog, will be very basic stuff, I’m sort of happy with how they turned out: I still stutter and rock back and forth, but not as much as usual.  For your viewing convenience, here are the new videos:

I had one other vignette, about why the universe exists, but they seem to have cut that one.  Alas, if I knew why the universe existed in January 2014, I can’t remember any more.

One embarrassing goof: I referred to the inventor of Newcomb’s Paradox as “Simon Newcomb.”  Actually it was William Newcomb: a distant relative of Simon Newcomb, the 19th-century astronomer who measured the speed of light.

At their website, you can also see my older 2011 videos, and videos from others who might be known to readers of this blog, like Marvin Minsky, Roger Penrose, Rebecca Newberger Goldstein, David Chalmers, Sean Carroll, Max Tegmark, David Deutsch, Raphael Bousso, Freeman Dyson, Nick Bostrom, Ray Kurzweil, Rodney Brooks, Stephen Wolfram, Greg Chaitin, Garrett Lisi, Seth Lloyd, Lenny Susskind, Lee Smolin, Steven Weinberg, Wojciech Zurek, Fotini Markopoulou, Juan Maldacena, Don Page, and David Albert.  (No, I haven’t yet watched most of these, but now that I linked to them, maybe I will!)

Thanks very much to Robert Lawrence Kuhn and Closer to Truth (and my previous self, I guess?) for providing Shtetl-Optimized content so I don’t have to.

Update: Andrew Critch of CFAR asked me to post the following announcement.

We’re seeking a full time salesperson for the Center for Applied Rationality in Berkeley, California. We’ve streamlined operations to handle large volume in workshop admissions, and now we need that volume to pour in. Your role would be to fill our workshops, events, and alumni community with people. Last year we had 167 total new alumni. This year we want 120 per month. Click here to find out more.

### Bell inequality violation finally done right

Tuesday, September 15th, 2015

A few weeks ago, Hensen et al., of the Delft University of Technology and Barcelona, Spain, put out a paper reporting the first experiment that violates the Bell inequality in a way that closes off the two main loopholes simultaneously: the locality and detection loopholes.  Well, at least with ~96% confidence.  This is big news, not only because of the result itself, but because of the advances in experimental technique needed to achieve it.  Last Friday, two renowned experimentalists—Chris Monroe of U. of Maryland and Jungsang Kim of Duke—visited MIT, and in addition to talking about their own exciting ion-trap work, they did a huge amount to help me understand the new Bell test experiment.  So OK, let me try to explain this.

While some people like to make it more complicated, the Bell inequality is the following statement. Alice and Bob are cooperating with each other to win a certain game (the “CHSH game“) with the highest possible probability. They can agree on a strategy and share information and particles in advance, but then they can’t communicate once the game starts. Alice gets a uniform random bit x, and Bob gets a uniform random bit y (independent of x).  Their goal is to output bits, a and b respectively, such that a XOR b = x AND y: in other words, such that a and b are different if and only if x and y are both 1.  The Bell inequality says that, in any universe that satisfies the property of local realism, no matter which strategy they use, Alice and Bob can win the game at most 75% of the time (for example, by always outputting a=b=0).

What does local realism mean?  It means that, after she receives her input x, any experiment Alice can perform in her lab has a definite result that might depend on x, on the state of her lab, and on whatever information she pre-shared with Bob, but at any rate, not on Bob’s input y.  If you like: a=a(x,w) is a function of x and of the information w available before the game started, but is not a function of y.  Likewise, b=b(y,w) is a function of y and w, but not of x.  Perhaps the best way to explain local realism is that it’s the thing you believe in, if you believe all the physicists babbling about “quantum entanglement” just missed something completely obvious.  Clearly, at the moment two “entangled” particles are created, but before they separate, one of them flips a tiny coin and then says to the other, “listen, if anyone asks, I’ll be spinning up and you’ll be spinning down.”  Then the naïve, doofus physicists measure one particle, find it spinning down, and wonder how the other particle instantly “knows” to be spinning up—oooh, spooky! mysterious!  Anyway, if that’s how you think it has to work, then you believe in local realism, and you must predict that Alice and Bob can win the CHSH game with probability at most 3/4.
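Since shared randomness is just a convex mixture of deterministic strategies, it suffices to check the 16 deterministic local strategies—Alice’s output a(x) and Bob’s output b(y), each an arbitrary function of that player’s own input—to confirm the 75% bound:

```python
from itertools import product

# Enumerate every deterministic local strategy for the CHSH game:
# (a0, a1) is Alice's output on input x = 0 or 1, (b0, b1) likewise
# for Bob.  They win when a XOR b = x AND y.
best = 0.0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    wins = sum(
        ((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
        for x in (0, 1) for y in (0, 1)
    )
    best = max(best, wins / 4)   # inputs x, y are uniform

print(best)  # 0.75
```

No strategy wins all four input pairs: XOR-ing the four win conditions together gives 0 on the left side but 1 on the right, a contradiction.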

What Bell observed in 1964 is that, even though quantum mechanics doesn’t let Alice send a signal to Bob (or vice versa) faster than the speed of light, it still makes a prediction about the CHSH game that conflicts with local realism.  (And thus, quantum mechanics exhibits what one might not have realized beforehand was even a logical possibility: it doesn’t allow communication faster than light, but simulating the predictions of quantum mechanics in a classical universe would require faster-than-light communication.)  In particular, if Alice and Bob share entangled qubits, say $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}},$$ then there’s a simple protocol that lets them violate the Bell inequality, winning the CHSH game ~85% of the time (with probability (1+1/√2)/2 > 3/4).  Starting in the 1970s, people did experiments that vindicated the prediction of quantum mechanics, and falsified local realism—or so the story goes.
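The protocol itself isn’t spelled out above, so here’s a sketch using the standard textbook measurement angles (0 and π/4 for Alice, ±π/8 for Bob—my choice of convention, not something from the post), simulated directly on the shared state:

```python
import numpy as np

def proj(t, outcome):
    # Projector for measuring a qubit in the basis rotated by angle t:
    # outcome 0 is cos(t)|0> + sin(t)|1>, outcome 1 its orthogonal complement.
    v = np.array([np.cos(t), np.sin(t)])
    if outcome == 1:
        v = np.array([-np.sin(t), np.cos(t)])
    return np.outer(v, v)

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
alice = {0: 0.0, 1: np.pi / 4}                 # Alice's angle given x
bob   = {0: np.pi / 8, 1: -np.pi / 8}          # Bob's angle given y

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                if (a ^ b) == (x & y):         # the CHSH win condition
                    M = np.kron(proj(alice[x], a), proj(bob[y], b))
                    win += 0.25 * (phi @ M @ phi)   # uniform x, y

print(round(win, 4))  # 0.8536, i.e. (1 + 1/sqrt(2))/2
```

The key fact doing the work: for this entangled state, the probability that the two outcomes agree is cos² of the angle between the two measurement bases.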

The violation of the Bell inequality has a schizophrenic status in physics.  To many of the physicists I know, Nature’s violating the Bell inequality is so trivial and obvious that it’s barely even worth doing the experiment: if people had just understood and believed Bohr and Heisenberg back in 1925, there would’ve been no need for this whole tiresome discussion.  To others, however, the Bell inequality violation remains so unacceptable that some way must be found around it—from casting doubt on the experiments that have been done, to overthrowing basic presuppositions of science (e.g., our own “freedom” to generate random bits x and y to send to Alice and Bob respectively).

For several decades, there was a relatively conservative way out for local realist diehards, and that was to point to “loopholes”: imperfections in the existing experiments which meant that local realism was still theoretically compatible with the results, at least if one was willing to assume a sufficiently strange conspiracy.

Fine, you interject, but surely no one literally believed these little experimental imperfections would be the thing that would rescue local realism?  Not so fast.  Right here, on this blog, I’ve had people point to the loopholes as a reason to accept local realism and reject the reality of quantum entanglement.  See, for example, the numerous comments by Teresa Mendes in my Whether Or Not God Plays Dice, I Do post.  Arguing with Mendes back in 2012, I predicted that the two main loopholes would both be closed in a single experiment—and not merely eventually, but in, like, a decade.  I was wrong: achieving this milestone took only a few years.

Before going further, let’s understand what the two main loopholes are (or rather, were).

The locality loophole arises because the measuring process takes time and Alice and Bob are not infinitely far apart.  Thus, suppose that, the instant Alice starts measuring her particle, a secret signal starts flying toward Bob’s particle at the speed of light, revealing her choice of measurement setting (i.e., the value of x).  Likewise, the instant Bob starts measuring his particle, his doing so sends a secret signal flying toward Alice’s particle, revealing the value of y.  By the time the measurements are finished, a few microseconds later, there’s been plenty of time for the two particles to coordinate their responses to the measurements, despite being “classical under the hood.”

Meanwhile, the detection loophole arises because in practice, measurements of entangled particles—especially of photons—don’t always succeed in finding the particles, let alone ascertaining their properties.  So one needs to select those runs of the experiment where Alice and Bob both find the particles, and discard all the “bad” runs where they don’t.  This by itself wouldn’t be a problem, if not for the fact that the very same measurement that reveals whether the particles are there, is also the one that “counts” (i.e., where Alice and Bob feed x and y and get out a and b)!

To someone with a conspiratorial mind, this opens up the possibility that the measurement’s success or failure is somehow correlated with its result, in a way that could violate the Bell inequality despite there being no real entanglement.  To illustrate, suppose that at the instant they’re created, one entangled particle says to the other: “listen, if Alice measures me in the x=0 basis, I’ll give the a=1 result.  If Bob measures you in the y=1 basis, you give the b=1 result.  In any other case, we’ll just evade detection and count this run as a loss.”  In such a case, Alice and Bob will win the game with certainty, whenever it gets played at all—but that’s only because of the particles’ freedom to choose which rounds will count.  Indeed, by randomly varying their “acceptable” x and y values from one round to the next, the particles can even make it look like x and y have no effect on the probability of a round’s succeeding.
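The conspiracy in the previous paragraph is easy to simulate with purely classical “particles” (all names and numbers in this sketch are mine, just to make the loophole concrete):

```python
import random

random.seed(0)
trials, detected, wins = 200000, 0, 0

for _ in range(trials):
    x, y = random.randint(0, 1), random.randint(0, 1)   # referee's inputs
    # At creation, the (classical!) pair secretly picks which inputs it
    # will "agree" to be measured with, and picks outputs satisfying the
    # win condition for exactly those inputs.
    xs, ys = random.randint(0, 1), random.randint(0, 1)
    a = random.randint(0, 1)
    b = a ^ (xs & ys)
    if (x, y) != (xs, ys):
        continue             # "evade detection": this round is discarded
    detected += 1
    wins += ((a ^ b) == (x & y))

print(wins / detected)       # 1.0: every detected round is a win
print(detected / trials)     # ~0.25, and independent of (x, y)
```

Note that because (xs, ys) is re-randomized each round, the detection probability really is the same for every input pair—exactly the camouflage described above.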

Until a month ago, the state-of-the-art was that there were experiments that closed the locality loophole, and other experiments that closed the detection loophole, but there was no single experiment that closed both of them.

To close the locality loophole, “all you need” is a fast enough measurement on photons that are far enough apart.  That way, even if the vast Einsteinian conspiracy is trying to send signals between Alice’s and Bob’s particles at the speed of light, to coordinate the answers classically, the whole experiment will be done before the signals can possibly have reached their destinations.  Admittedly, as Nicolas Gisin once pointed out to me, there’s a philosophical difficulty in defining what we mean by the experiment being “done.”  To some purists, a Bell experiment might only be “done” once the results (i.e., the values of a and b) are registered in human experimenters’ brains!  And given the slowness of human reaction times, this might imply that a real Bell experiment ought to be carried out with astronauts on faraway space stations, or with Alice on the moon and Bob on earth (which, OK, would be cool).  If we’re being reasonable, however, we can grant that the experiment is “done” once a and b are safely recorded in classical, macroscopic computer memories—in which case, given the speed of modern computer memories, separating Alice and Bob by half a kilometer can be enough.  And indeed, experiments starting in 1998 (see for example here) have done exactly that; the current record, unless I’m mistaken, is 18 kilometers.  (Update: I was mistaken; it’s 144 kilometers.)  Alas, since these experiments used hard-to-measure photons, they were still open to the detection loophole.

To close the detection loophole, the simplest approach is to use entangled qubits that (unlike photons) are slow and heavy and can be measured with success probability approaching 1.  That’s exactly what various groups did starting in 2001 (see for example here), with trapped ions, superconducting qubits, and other systems.  Alas, given current technology, these sorts of qubits are virtually impossible to move miles apart from each other without decohering them.  So the experiments used qubits that were close together, leaving the locality loophole wide open.

So the problem boils down to: how do you create long-lasting, reliably-measurable entanglement between particles that are very far apart (e.g., in separate labs)?  There are three basic ideas in Hensen et al.’s solution to this problem.

The first idea is to use a hybrid system.  Ultimately, Hensen et al. create entanglement between electron spins in nitrogen vacancy centers in diamond (one of the hottest—or coolest?—experimental quantum information platforms today), in two labs that are about a mile away from each other.  To get these faraway electron spins to talk to each other, they make them communicate via photons.  If you stimulate an electron, it’ll sometimes emit a photon with which it’s entangled.  Very occasionally, the two electrons you care about will even emit photons at the same time.  In those cases, by routing those photons into optical fibers and then measuring the photons, it’s possible to entangle the electrons.

Wait, what?  How does measuring the photons entangle the electrons from whence they came?  This brings us to the second idea, entanglement swapping.  The latter is a famous procedure to create entanglement between two particles A and B that have never interacted, by “merely” entangling A with another particle A’, entangling B with another particle B’, and then performing an entangled measurement on A’ and B’ and conditioning on its result.  To illustrate, consider the state

$$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}} \otimes \frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}}$$

and now imagine that we project the first and third qubits onto the state $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}}.$$

If the measurement succeeds, you can check that we’ll be left with the state $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}}$$ in the second and fourth qubits, even though those qubits were not entangled before.
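You can verify this claim in a few lines of numpy. Here qubits are ordered (A, A′, B′, B), so the two initial Bell pairs are (A, A′) and (B′, B), and the entangled measurement projects the middle pair:

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)

# Two independent Bell pairs, with qubit ordering (A, A', B', B).
state = np.kron(bell, bell).reshape(2, 2, 2, 2)

# Project the middle pair (A', B') onto the Bell state -- the
# (probabilistic) entangled measurement performed at the third site.
post = np.einsum('aijb,ij->ab', state, bell.reshape(2, 2)).reshape(4)
p_success = np.linalg.norm(post) ** 2          # probability of this outcome
post /= np.linalg.norm(post)                   # renormalize

print(np.round(post, 3))   # A and B are now in (|00> + |11>)/sqrt(2)
print(p_success)           # 0.25
```

Qubits A and B, which never interacted, end up maximally entangled whenever the projection succeeds.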

So to recap: these two electron spins, in labs a mile away from each other, both have some probability of producing a photon.  The photons, if produced, are routed to a third site, where if they’re both there, then an entangled measurement on both of them (and a conditioning on the results of that measurement) has some nonzero probability of causing the original electron spins to become entangled.

But there’s a problem: if you’ve been paying attention, all we’ve done is cause the electron spins to become entangled with some tiny, nonzero probability (something like 6.4×10^-9 in the actual experiment).  So then, why is this any improvement over the previous experiments, which just directly measured faraway entangled photons, and also had some small but nonzero probability of detecting them?

This leads to the third idea.  The new setup is an improvement because, whenever the photon measurement succeeds, we know that the electron spins are there and that they’re entangled, without having to measure the electron spins to tell us that.  In other words, we’ve decoupled the measurement that tells us whether we succeeded in creating an entangled pair, from the measurement that uses the entangled pair to violate the Bell inequality.  And because of that decoupling, we can now just condition on the runs of the experiment where the entangled pair was there, without worrying that that will open up the detection loophole, biasing the results via some bizarre correlated conspiracy.  It’s as if the whole experiment were simply switched off, except for those rare lucky occasions when an entangled spin pair gets created (with its creation heralded by the photons).  On those rare occasions, Alice and Bob swing into action, measuring their respective spins within the brief window of time—about 4 microseconds—allowed by the locality loophole, seeking an additional morsel of evidence that entanglement is real.  (Well, actually, Alice and Bob swing into action regardless; they only find out later whether this was one of the runs that “counted.”)

So, those are the main ideas (as well as I understand them); then there’s lots of engineering.  In their setup, Hensen et al. were able to create just a few heralded entangled pairs per hour.  This allowed them to produce 245 CHSH games for Alice and Bob to play, and to reject the hypothesis of local realism at ~96% confidence.  Jungsang Kim explained to me that existing technologies could have produced many more events per hour, and hence, in a similar amount of time, “particle physics” (5σ or more) rather than “psychology” (2σ) levels of confidence that local realism is false.  But in this type of experiment, everything is a tradeoff.  Building not one but two labs for manipulating NV centers in diamond is extremely onerous, and Hensen et al. did what they had to do to get a significant result.

The basic idea here, of using photons to entangle longer-lasting qubits, is useful for more than pulverizing local realism.  In particular, the idea is a major part of current proposals for how to build a scalable ion-trap quantum computer.  Because of cross-talk, you can’t feasibly put more than 10 or so ions in the same trap while keeping all of them coherent and controllable.  So the current ideas for scaling up involve having lots of separate traps—but in that case, one will sometimes need to perform a Controlled-NOT, or some other 2-qubit gate, between a qubit in one trap and a qubit in another.  This can be achieved using the Gottesman-Chuang technique of gate teleportation, provided you have reliable entanglement between the traps.  But how do you create such entanglement?  Aha: the current idea is to entangle the ions by using photons as intermediaries, very similar in spirit to what Hensen et al. do.

At a more fundamental level, will this experiment finally convince everyone that local realism is dead, and that quantum mechanics might indeed be the operating system of reality?  Alas, I predict that those who confidently predicted that a loophole-free Bell test could never be done, will simply find some new way to wiggle out, without admitting the slightest problem for their previous view.  This prediction, you might say, is based on a different kind of realism.

### Randomness Rules in Quantum Mechanics

Monday, June 16th, 2014

So, Part II of my two-part series for American Scientist magazine about how to recognize random numbers is now out.  This part—whose original title was the one above, but was changed to “Quantum Randomness” to fit the allotted space—is all about quantum mechanics and the Bell inequality, and their use in generating “Einstein-certified random numbers.”  I discuss the CHSH game, the Free Will Theorem, and Gerard ‘t Hooft’s “superdeterminism” (just a bit), before explaining the striking recent protocols of Colbeck, Pironio et al., Vazirani and Vidick, Coudron and Yuen, and Miller and Shi, all of which expand a short random seed into additional random bits that are “guaranteed to be random unless Nature resorted to faster-than-light communication to bias them.”  I hope you like it.

[Update: See here for Hacker News thread]

In totally unrelated news, President Obama’s commencement speech at UC Irvine, about climate change and the people who still deny its reality, is worth reading.

### Collaborative Refutation

Monday, February 4th, 2013

At least eight people—journalists, colleagues, blog readers—have now asked my opinion of a recent paper by Ross Anderson and Robert Brady, entitled “Why quantum computing is hard and quantum cryptography is not provably secure.”  Where to begin?

1. Based on a “soliton” model—which seems to be almost a local-hidden-variable model, though not quite—the paper advances the prediction that quantum computation will never be possible with more than 3 or 4 qubits.  (Where “3 or 4” are not just convenient small numbers, but actually arise from the geometry of spacetime.)  I wonder: before uploading their paper, did the authors check whether their prediction was, y’know, already falsified?  How do they reconcile their proposal with (for example) the 8-qubit entanglement observed by Haffner et al. with trapped ions—not to mention the famous experiments with superconducting Josephson junctions, buckyballs, and so forth that have demonstrated the reality of entanglement among many thousands of particles (albeit not yet in a “controllable” form)?
2. The paper also predicts that, even with 3 qubits, general entanglement will only be possible if the qubits are not collinear; with 4 qubits, general entanglement will only be possible if the qubits are not coplanar.  Are the authors aware that, in ion-trap experiments (like those of David Wineland that recently won the Nobel Prize), the qubits generally are arranged in a line?  See for example this paper, whose abstract reads in part: “Here we experimentally demonstrate quantum error correction using three beryllium atomic-ion qubits confined to a linear, multi-zone trap.”
3. Finally, the paper argues that, because entanglement might not be a real phenomenon, the security of quantum key distribution remains an open question.  Again: are the authors aware that the most practical QKD schemes, like BB84, never use entanglement at all?  And that therefore, even if the paper’s quasi-local-hidden-variable model were viable (which it’s not), it still wouldn’t justify the claim in the title that “…quantum cryptography is not provably secure”?

Yeah, this paper is pretty uninformed even by the usual standards of attempted quantum-mechanics-overthrowings.  Let me now offer three more general thoughts.

First thought: it’s ironic that I’m increasingly seeing eye-to-eye with Lubos Motl—who once called me “the most corrupt piece of moral trash”—in his rantings against the world’s “anti-quantum-mechanical crackpots.”  Let me put it this way: David Deutsch, Chris Fuchs, Sheldon Goldstein, and Roger Penrose hold views about quantum mechanics that are diametrically opposed to one another’s.  Yet each of these very different physicists has earned my admiration, because each, in his own way, is trying to listen to whatever quantum mechanics is saying about how the world works.  However, there are also people all of whose “thoughts” about quantum mechanics are motivated by the urge to plug their ears and shut out whatever quantum mechanics is saying—to show how whatever naïve ideas they had before learning QM might still be right, and how all the experiments of the last century that seem to indicate otherwise might still be wiggled around.  Like monarchists or segregationists, these people have been consistently on the losing side of history for generations—so it’s surprising, to someone like me, that they continue to show up totally unfazed and itching for battle, like the knight from Monty Python and the Holy Grail with his arms and legs hacked off.  (“Bell’s Theorem?  Just a flesh wound!”)

Like any physical theory, of course quantum mechanics might someday be superseded by an even deeper theory.  If and when that happens, it will rank alongside Newton’s apple, Einstein’s elevator, and the discovery of QM itself among the great turning points in the history of physics.  But it’s crucial to understand that that’s not what we’re discussing here.  Here we’re discussing the possibility that quantum mechanics is wrong, not for some deep reason, but for a trivial reason that was somehow overlooked since the 1920s—that there’s some simple classical model that would make everyone exclaim,  “oh!  well, I guess that whole framework of exponentially-large Hilbert space was completely superfluous, then.  why did anyone ever imagine it was needed?”  And the probability of that is comparable to the probability that the Moon is made of Gruyère.  If you’re a Bayesian with a sane prior, stuff like this shouldn’t even register.

Second thought: this paper illustrates, better than any other I’ve seen, how despite appearances, the “quantum computing will clearly be practical in a few years!” camp and the “quantum computing is clearly impossible!” camp aren’t actually opposed to each other.  Instead, they’re simply two sides of the same coin.  Anderson and Brady start from the “puzzling” fact that, despite what they call “the investment of tremendous funding resources worldwide” over the last decade, quantum computing still hasn’t progressed beyond a few qubits, and propose to overthrow quantum mechanics as a way to resolve the puzzle.  To me, this is like arguing in 1835 that, since Charles Babbage still hasn’t succeeded in building a scalable classical computer, we need to rewrite the laws of physics in order to explain why classical computing is impossible.  I.e., it’s a form of argument that only makes sense if you’ve adopted what one might call the “Hype Axiom”: the axiom that any technology that’s possible sometime in the future, must in fact be possible within the next few years.

Third thought: it’s worth noting that, if (for example) you found Michel Dyakonov’s arguments against QC (discussed on this blog a month ago) persuasive, then you shouldn’t find Anderson’s and Brady’s persuasive, and vice versa.  Dyakonov agrees that scalable QC will never work, but he ridicules the idea that we’d need to modify quantum mechanics itself to explain why.  Anderson and Brady, by contrast, are so eager to modify QM that they don’t mind contradicting a mountain of existing experiments.  Indeed, the question occurs to me of whether there’s any pair of quantum computing skeptics whose arguments for why QC can’t work are compatible with one another’s.  (Maybe Alicki and Dyakonov?)

But enough of this.  The truth is that, at this point in my life, I find it infinitely more interesting to watch my two-week-old daughter Lily, as she discovers the wonderful world of shapes, colors, sounds, and smells, than to watch Anderson and Brady, as they fail to discover the wonderful world of many-particle quantum mechanics.  So I’m issuing an appeal to the quantum computing and information community.  Please, in the comments section of this post, explain what you thought of the Anderson-Brady paper.  Don’t leave me alone to respond to this stuff; I don’t have the time or the energy.  If you get quantum probability, then stand up and be measured!

### I was wrong about Joy Christian

Thursday, May 10th, 2012

Update: I decided to close comments on this post and the previous Joy Christian post, because they simply became too depressing for me.

I’ve further decided to impose a moratorium, on this blog, on all discussions about the validity of quantum mechanics in the microscopic realm, the reality of quantum entanglement, or the correctness of theorems such as Bell’s Theorem.  I might lift the moratorium at some future time.  For now, though, life simply feels too short to me, and the actually-interesting questions too numerous.  Imagine, for example, that there existed a devoted band of crackpots who believed, for complicated, impossible-to-pin-down reasons of topology and geometric algebra, that triangles actually have five corners.  These crackpots couldn’t be persuaded by rational argument—indeed, they didn’t even use words and sentences the same way you do, to convey definite meaning.  And crucially, they had infinite energy: you could argue with them for weeks, and they would happily argue back, until you finally threw up your hands in despair for all humanity, at which point the crackpots would gleefully declare, “haha, we won!  the silly ‘triangles have 3 corners’ establishment cabal has admitted defeat!”  And, in a sense, they would have won: with one or two exceptions, the vast majority who know full well how many corners a triangle has simply never showed up to the debate, thereby conceding to the 5-cornerists by default.

What would you do in such a situation?  If you figure it out, please let me know (but by email, not by blog comment).

In response to my post criticizing his “disproof” of Bell’s Theorem, Joy Christian taunted me that “all I knew was words.”  By this, he meant that my criticisms were entirely based on circumstantial evidence, for example that (1) Joy clearly didn’t understand what the word “theorem” even meant, (2) every other sentence he uttered contained howling misconceptions, (3) his papers were written in an obscure, “crackpot” way, and (4) several people had written very clear papers pointing out mathematical errors in his work, to which Joy had responded only with bluster.  But I hadn’t actually studied Joy’s “work” at a technical level.  Well, yesterday I finally did, and I confess that I was astonished by what I found.  Before, I’d actually given Joy some tiny benefit of the doubt—possibly misled by the length and semi-respectful tone of the papers refuting his claims.  I had assumed that Joy’s errors, though ultimately trivial (how could they not be, when he’s claiming to contradict such a well-understood fact provable with a few lines of arithmetic?), would nevertheless be artfully concealed, and would require some expertise in geometric algebra to spot.  I’d also assumed that of course Joy would have some well-defined hidden-variable model that reproduced the quantum-mechanical predictions for the Bell/CHSH experiment (how could he not?), and that the “only” problem would be that, due to cleverly-hidden mistakes, his model would be subtly nonlocal.

What I actually found was a thousand times worse: closer to the stuff freshmen scrawl on an exam when they have no clue what they’re talking about but are hoping for a few pity points.  It’s so bad that I don’t understand how even Joy’s fellow crackpots haven’t laughed this off the stage.  Look, Joy has a hidden variable λ, which is either 1 or -1 uniformly at random.  He also has a measurement choice a of Alice, and a measurement choice b of Bob.  He then defines Alice and Bob’s measurement outcomes A and B via the following functions:

A(a,λ) = something complicated = (as Joy correctly observes) λ

B(b,λ) = something complicated = (as Joy correctly observes) -λ

I shit you not.  A(a,λ) = λ, and B(b,λ) = -λ.  Neither A nor B has any dependence on the choices of measurement a and b, and the complicated definitions that he gives for them turn out to be completely superfluous.  No matter what measurements are made, A and B are always perfectly anticorrelated with each other.

You might wonder: what could lead anyone—no matter how deluded—even to think such a thing could violate the Bell/CHSH inequalities?  Aha, Joy says you only ask such a naïve question because, lacking his deep topological insight, you make the rookie mistake of looking at the actual outcomes that his model actually predicts for the actual measurements that are actually made.  What you should do, instead, is compute a “correlation function” E(a,b) that’s defined by dividing A(a,λ)B(b,λ) by a “normalizing factor” that’s a product of the quaternions a and b, with a divided on the left and b divided on the right.  Joy seems to have obtained this “normalizing factor” via the technique of pulling it out of his rear end.  Now, as Gill shows, Joy actually makes an algebra mistake while computing his nonsensical “correlation function.”  The answer should be -a.b-a×b, not -a.b.  But that’s truthfully beside the point.  It’s as if someone announced his revolutionary discovery that P=NP implies N=1, and then critics soberly replied that, no, the equation P=NP can also be solved by P=0.

So, after 400+ comments on my previous thread—including heady speculations about M-theory, the topology of spacetime, the Copenhagen interpretation, continuity versus discreteness, etc., as well as numerous comparisons to Einstein—this is what it boils down to.  A(a,λ) = λ and B(b,λ) = -λ.
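It takes about ten lines of Python to confirm that a model of this shape can never violate CHSH. This is my own rendering, with the “something complicated” parts already collapsed to ±λ as described above:

```python
import random

def A(a, lam):
    """Alice's outcome in the reduced model: ignores her setting a entirely."""
    return lam

def B(b, lam):
    """Bob's outcome: ignores his setting b; perfectly anticorrelated with A."""
    return -lam

def E(a, b, trials=10_000):
    """Empirical correlation <A*B> over the shared hidden variable lam."""
    total = 0
    for _ in range(trials):
        lam = random.choice((+1, -1))
        total += A(a, lam) * B(b, lam)
    return total / trials

# CHSH: any local model satisfies |E(a,b) + E(a,b') + E(a',b) - E(a',b')| <= 2
a1, a2, b1, b2 = 0.0, 1.0, 0.5, 1.5   # arbitrary settings; the model ignores them
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(S)   # exactly -2.0: every E is -1, so the quantum value ~2.83 is unreachable
```

Since A·B = -λ² = -1 on every single run, each correlation is exactly -1 and |S| = 2, precisely saturating (never violating) the classical bound, no matter what measurements are chosen.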

I call on FQXi, in the strongest possible terms, to stop lending its legitimacy to this now completely-unmasked charlatan.  If it fails to do so, then I will resign from FQXi, and will encourage fellow FQXi members to do the same.

While I don’t know the exact nature of Joy’s relationship to Oxford University or to the Perimeter Institute, I also call on those institutions to sever any connections they still have with him.

Finally, with this post I’m going to try a new experiment.  I will allow comments through the moderation filter if, and only if, they exceed a minimum threshold of sanity and comprehensibility, and do not randomly throw around terms like “M-theory” with no apparent understanding of what they mean.  Comments below the sanity threshold can continue to appear freely in the previous Joy Christian thread (which already has a record-setting number of comments…).

Update (May 11): A commenter pointed me to a beautiful preprint by James Owen Weatherall, which tries sympathetically to make as much sense as possible out of Joy Christian’s ideas, and then carefully explains why the attempt fails (long story short: because of Bell’s theorem!).  Notice the contrast between the precision and clarity of Weatherall’s prose—the way he defines and justifies each concept before using it—and the obscurity of Christian’s prose.

Another Update: Over on the previous Joy Christian thread, some commenters are now using an extremely amusing term for people who believe that theories in physics ought to say something comprehensible about the predicted outcomes of physics experiments.  The term: “computer nerd.”

Third Update: Quite a few commenters seem to assume that I inappropriately used my blog to “pick a fight” with poor defenseless Joy Christian, who was minding his own business disproving and re-disproving Bell’s Theorem.  So let me reiterate that I wasn’t looking for this confrontation, and in fact took great pains to avoid it for six years, even as Joy became more and more vocal.  It was Joy, not me, who finally forced matters to a head through his absurd demand that I pay him $100,000 “with interest,” and then his subsequent attacks.

### Bell’s-inequality-denialist Joy Christian offers me $200K if scalable quantum computers are built

Wednesday, May 2nd, 2012

Joy Christian is the author of numerous papers claiming to disprove Bell’s theorem.  Yes, that Bell’s theorem: the famous result from the 1960s showing that no local hidden variable theory can reproduce all predictions of quantum mechanics for entangled states of two particles.  Here a “local hidden variable theory” means—and has always meant—a theory where Alice gets some classical information x, Bob gets some other classical information y (generally correlated with x), then Alice and Bob choose which respective experiments to perform, and finally Alice sees a measurement outcome that’s a function only of her choice and of x (not of Bob’s choice or his measurement outcome), and Bob sees a measurement outcome that’s a function only of his choice and of y.  In modern terms, Bell, with simplifications by Clauser et al., gave an example of a game that Alice and Bob can win at most 75% of the time under any local hidden variable theory (that’s the Bell inequality), but can win about 85% of the time by measuring their respective halves of an entangled state (that’s the Bell inequality violation).  The proofs are quite easy, both for the inequality and for its violation by quantum mechanics.  Check out this problem set for the undergrad course I’m currently teaching if you’d like to be led through the proof yourself (it’s problem 7).
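For the quantum side of that game, the textbook strategy measures a shared Bell pair at angles 0 and π/4 (Alice) and ±π/8 (Bob). The short numpy sketch below (my illustration, using one standard choice of angles) computes the exact success probability, cos²(π/8) ≈ 0.8536, directly from projectors, with no sampling involved:

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # Bell state

def projector(theta, outcome):
    """Projector onto the 0/1 outcome of a measurement basis rotated by theta."""
    v = (np.array([np.cos(theta), np.sin(theta)]) if outcome == 0
         else np.array([-np.sin(theta), np.cos(theta)]))
    return np.outer(v, v)

alice = [0.0, np.pi / 4]        # Alice's angle for her input bit x = 0, 1
bob = [np.pi / 8, -np.pi / 8]   # Bob's angle for his input bit y = 0, 1

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for out_a in (0, 1):
            for out_b in (0, 1):
                if (out_a ^ out_b) == (x & y):   # CHSH winning condition
                    M = np.kron(projector(alice[x], out_a),
                                projector(bob[y], out_b))
                    win += (phi_plus.conj() @ M @ phi_plus).real

win /= 4   # the four (x, y) input pairs are equally likely
print(round(win, 4), round(np.cos(np.pi / 8) ** 2, 4))  # both print 0.8536
```

Each (x, y) pair individually succeeds with probability cos²(π/8): the strategy keeps the angle gap at π/8 whenever the outputs must agree, and at 3π/8 in the one case where they must disagree.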

In case you’re wondering: no, Bell’s Theorem has no more been “disproved” than the Cauchy-Schwarz Inequality, and it will never be, even if papers claiming otherwise are stacked to the moon.  Like Gödel’s and Cantor’s Theorems, Bell’s Theorem has long been a lightning rod for incomprehension and even anger; I saw another “disproof” at a conference in 2003, and will doubtless see more in the future.  The disproofs invariably rely on personal reinterpretations of the perfectly-clear concept of “local hidden variables,” to smuggle in what would normally be called non-local variables.  That smuggling is accompanied by mathematical sleight-of-hand (the more, the better) to disguise the ultimately trivial error.

While I’d say the above—loudly, even—to anyone who asked, I also declined several requests to write a blog post about Joy Christian and his mistakes.  His papers had already been refuted ad nauseam by others (incidentally, I find myself in complete agreement with Luboš Motl on this one!), and I saw no need to pile on the poor dude.  Having met him, at the Perimeter Institute and at several conferences, I found something poignant and even touching about Joy’s joyless quest.  I mean, picture a guy who made up his mind at some point that, let’s say, √2 is actually a rational number, all the mathematicians having been grievously wrong for millennia—and then unironically held to that belief his entire life, heroically withstanding the batterings of reason.  Show him why 2 = A²/B² has no solution in positive integers A, B, and he’ll answer that you haven’t understood the very concept of rational number as deeply as him.  Ask him what he means by “rational number,” and you’ll quickly enter the territory of the Monty Python dead parrot sketch.  So why not just leave this dead parrot where it lies?

Anyway, that’s what I was perfectly content to do, until Monday, when Joy left the following comment on my “Whether or not God plays dice, I do” post:

Scott,
You owe me 100,000 US Dollars plus five years of interest. In 2007, right under your nose (when you and I were both visiting Perimeter Institute), I demonstrated, convincing to me, that scalable quantum computing is impossible in the physical world.

He included a link to his book, in case I wanted to review his arguments against the reality of entanglement.  I have to confess I had no idea that, besides disproving Bell’s theorem, Joy had also proved the impossibility of scalable quantum computing.  Based on his previous work, I would have expected him to say that, sure, quantum computers could quickly factor 10,000-digit numbers, but nothing about that would go beyond ordinary, classical, polynomial-time Turing machines—because Turing himself got the very definition of Turing machines wrong, by neglecting topological octonion bivectors or something.

Be that as it may, Joy then explained that the purpose of his comment was to show that

there is absolutely nothing that would convince you to part with your 100,000. You know that, and everyone else knows that … The whole thing is just a smug scam to look smarter than the rest of us without having to do the hard work. Good luck with that.

In response, I clarified what it would take to win my bet:

As I’ve said over and over, what would be necessary and sufficient would be to convince the majority of the physics community. Do you hope and expect to do that? If so, then you can expect my \$100,000; if not, then not. If a scientific revolution has taken place only inside the revolutionary’s head, then let the monetary rewards be likewise confined to his head.

Joy replied:

[L]et us forget about my work. It is not for you. Instead, let me make a counter offer to you. I will give you 200,000 US dollars the day someone produces an actual, working, quantum computer in a laboratory recognizable by me. If I am still alive, I will send you 200,000 US Dollars, multiplied by an appropriate inflation factor. Go build a quantum computer.

I’m grateful to Joy for his exceedingly generous offer.  But let’s forget about money for now.  Over the past few months, I’ve had a real insight: the most exciting potential application of scalable quantum computers is neither breaking RSA, nor simulating quantum physics, nor Grover’s algorithm, nor adiabatic optimization.  Instead, it’s watching the people who said it was impossible try to explain themselves.  That prospect, alone, would more than justify a Manhattan-project-scale investment in this field.

Postscript. If you want something about quantum foundations and hidden-variable theories of a bit more scientific interest, check out this MathOverflow question I asked on Monday, which was answered within one day by George Lowther (I then carefully wrote up the solution he sketched).

Updates (May 6). Depending on what sort of entertainment you enjoy, you might want to check out the comments section, where you can witness Joy Christian becoming increasingly unhinged in his personal attacks on me and others (“our very own FQXi genius” – “biased and closed-minded” – “incompetent” – “Scott’s reaction is a textbook case for the sociologists” – “As for Richard Gill, he is evidently an incompetent mathematician” – “I question your own intellectual abilities” – “your entire world view is based on an experimentally unsupported (albeit lucrative) belief and nothing else” – “You have been caught with your pants down and still refusing to see what is below your belly” – “let me point out that you are the lesser brain among the two of us. The pitiful flatness of your brain would be all too painful for everyone to see when my proposed experiment is finally done” – etc., etc).  To which I respond: the flatness of my brain?  Also notable is Joy’s Tourette’s-like repetition of the sentence, “I will accept judgement from no man but Nature.”  Nature is a man?

I just posted a comment explaining the Bell/CHSH inequality in the simplest terms I know, which I’ll repost here for convenience:

Look everyone, consider the following game. Two players, Alice and Bob, can agree on a strategy in advance, but from that point forward, are out of communication with each other (and don’t share quantum entanglement or anything like that). After they’re separated, Alice receives a uniformly-random bit A, and Bob receives another uniformly-random bit B (uncorrelated with A). Their joint goal is for Alice to output a bit X, and Bob to output a bit Y, such that

X + Y = AB (mod 2)

or equivalently,

X XOR Y = A AND B.

They want to succeed with the largest possible probability. It’s clear that one strategy they can follow is always to output X=Y=0, in which case they’ll win 75% of the time (namely, in three of the four cases, all except A=B=1).

Furthermore, by enumerating all of Alice and Bob’s possible pure strategies and then appealing to convexity, one can check that there’s no strategy that lets them win more than 75% of the time.  In other words, no matter what they do, they lose for one of the four possible (A,B) pairs.

Do you agree with the previous paragraph? If so, then you accept the Bell/CHSH inequality, end of story.
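The “enumerating all pure strategies” step is small enough to run exhaustively; here is a quick sketch (my own, not from the original comment):

```python
from itertools import product

# A deterministic strategy is a pair of functions f, g : {0,1} -> {0,1};
# encode f as the tuple (f(0), f(1)) for Alice, and likewise g for Bob.
best = 0
for f in product((0, 1), repeat=2):
    for g in product((0, 1), repeat=2):
        wins = sum((f[a] ^ g[b]) == (a & b) for a in (0, 1) for b in (0, 1))
        best = max(best, wins)

print(best, "/ 4")   # 3 / 4: no deterministic strategy wins more than 75%
```

Randomized strategies are probability mixtures of the sixteen deterministic ones, so by convexity none of them can beat 3/4 either; that is the whole content of the Bell/CHSH inequality.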

Of all the papers pointing out the errors in Joy Christian’s attempted refutations of the simple arithmetic above, my favorite is Richard Gill’s.  Let me quote from Gill’s eloquent conclusion:

There remains a psychological question, why so strong a need is felt by so many researchers to “disprove Bell” in one way or another? At a rough guess, at least one new proposal comes up per year. Many pass by unnoticed, but from time to time one of them attracts some interest and even media attention. Having studied a number of these proposals in depth, I see two main strategies of would-be Bell-deniers.

The first strategy (the strategy, I would guess, in the case in question) is to build elaborate mathematical models of such complexity and exotic nature that the author him or herself is probably the only person who ever worked through all the details. Somewhere in the midst of the complexity a simple mistake is made, usually resulting from suppression of an important index or variable. There is a hidden and non-local hidden variable.

The second strategy is to simply build elaborate versions of detection loophole models. Sometimes the same proposal can be interpreted in both ways at the same time, since of course either the mistake or the interpretation as a detection loophole model are both interpretations of the reader, not of the writer.

According to the Anna Karenina principle of evolutionary biology, in order for things to succeed, everything has to go exactly right, while for failure, it suffices if any one of a myriad factors is wrong. Since errors are typically accidental and not recognized, an apparently logical deduction which leads to a manifestly incorrect conclusion does not need to allow a unique diagnosis. If every apparently logical step had been taken with explicit citation of the mathematical rule which was being used, and in a specified context, one could say where the first misstep was taken. But mathematics is almost never written like that, and for good reasons. The writer and the reader, coming from the same scientific community, share a host of “hidden assumptions” which can safely be taken for granted, as long as no self-contradiction occurs. Saying that the error actually occurred in such-and-such an equation at such-and-such a substitution depends on various assumptions.

The author who still believes in his result will therefore claim that the diagnosis is wrong because the wrong context has been assumed.

We can be grateful for Christian that he has had the generosity to write his one page paper with a more or less complete derivation of his key result in a more or less completely explicit context, without distraction from the author’s intended physical interpretation of the mathematics. The mathematics should stand on its own, the interpretation is “free”.  My finding is that in this case, the mathematics does not stand on its own.

Update (5/7): I can’t think of any better illustration than the comment thread below for my maxim that computation is clarity.  In other words, if you can’t explain how to simulate your theory on a computer, chances are excellent that the reason is that your theory makes no sense!  The following comment of mine expands on this point:

The central concept that I find missing from the comments of David Brown, James Putnam, and Thomas Ray is that of the sanity check.

Math and computation are simply the tools of clear thought. For example, if someone tells me that a 4-by-4 array of zorks contains 25 zorks in total, and I respond that 4 times 4 is 16, not 25, I’m not going to be impressed if the person then starts waxing poetic about how much more profound the physics of zorks is than my narrow and restricted notions of “arithmetic”. There must be a way to explain the discrepancy even at a purely arithmetical level. If there isn’t, then the zork theory has failed a basic sanity check, and there’s absolutely no reason to study its details further.

Likewise, the fact that Joy can’t explain how to code a computer simulation of (say) his exploding toy ball experiment that would reproduce his predicted Bell/CHSH violation is extremely revealing. This is also a sanity check, and it’s one that Joy flunks. Granted, if he were able to explain his model clearly enough for well-intentioned people to understand how to program it on a computer, then almost certainly there would be no need to actually run the program! We could probably just calculate what the program did using pencil and paper. Nevertheless, Bram, John Sidles, and others were entirely right to harp on this simulation question, because its real role is as a sanity check. If Joy’s ideas are not meaningless nonsense, then there’s no reason at all why we shouldn’t be able to simulate his experiment on a computer and get exactly the outcome that he predicts. Until Joy passes this minimal sanity check—which he hasn’t—there’s simply no need to engage in deep ruminations like the ones above about physics or philosophy or Joy’s “Theorema Egregious.”