Archive for the ‘Bell’s Theorem? But a Flesh Wound!’ Category

Bell inequality violation finally done right

Tuesday, September 15th, 2015

A few weeks ago, Hensen et al., of the Delft University of Technology and Barcelona, Spain, put out a paper reporting the first experiment that violates the Bell inequality in a way that closes off the two main loopholes simultaneously: the locality and detection loopholes.  Well, at least with ~96% confidence.  This is big news, not only because of the result itself, but because of the advances in experimental technique needed to achieve it.  Last Friday, two renowned experimentalists—Chris Monroe of U. of Maryland and Jungsang Kim of Duke—visited MIT, and in addition to talking about their own exciting ion-trap work, they did a huge amount to help me understand the new Bell test experiment.  So OK, let me try to explain this.

While some people like to make it more complicated, the Bell inequality is the following statement. Alice and Bob are cooperating with each other to win a certain game (the “CHSH game“) with the highest possible probability. They can agree on a strategy and share information and particles in advance, but then they can’t communicate once the game starts. Alice gets a uniform random bit x, and Bob gets a uniform random bit y (independent of x).  Their goal is to output bits, a and b respectively, such that a XOR b = x AND y: in other words, such that a and b are different if and only if x and y are both 1.  The Bell inequality says that, in any universe that satisfies the property of local realism, no matter which strategy they use, Alice and Bob can win the game at most 75% of the time (for example, by always outputting a=b=0).
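
(If you'd like to see that 3/4 bound verified rather than just asserted, here's a tiny Python sketch, my own illustration rather than anything from Bell's paper, that enumerates all 16 deterministic strategies; since shared randomness is just a mixture of deterministic strategies, it can't do any better.)

```python
from itertools import product

def chsh_win(a_strategy, b_strategy):
    """Fraction of the four (x, y) input pairs won by a deterministic strategy.

    a_strategy and b_strategy map each input bit to an output bit.
    The game is won when a XOR b == x AND y."""
    wins = 0
    for x, y in product([0, 1], repeat=2):
        a = a_strategy[x]
        b = b_strategy[y]
        if a ^ b == x & y:
            wins += 1
    return wins / 4

# Each player's strategy is a function {0,1} -> {0,1}: a pair (output on 0, output on 1).
strategies = list(product([0, 1], repeat=2))
best = max(chsh_win(sa, sb) for sa in strategies for sb in strategies)
print(best)   # 0.75 -- no deterministic (hence no randomized) classical strategy beats 3/4
```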

What does local realism mean?  It means that, after she receives her input x, any experiment Alice can perform in her lab has a definite result that might depend on x, on the state of her lab, and on whatever information she pre-shared with Bob, but at any rate, not on Bob’s input y.  If you like: a=a(x,w) is a function of x and of the information w available before the game started, but is not a function of y.  Likewise, b=b(y,w) is a function of y and w, but not of x.  Perhaps the best way to explain local realism is that it’s the thing you believe in, if you believe all the physicists babbling about “quantum entanglement” just missed something completely obvious.  Clearly, at the moment two “entangled” particles are created, but before they separate, one of them flips a tiny coin and then says to the other, “listen, if anyone asks, I’ll be spinning up and you’ll be spinning down.”  Then the naïve, doofus physicists measure one particle, find it spinning down, and wonder how the other particle instantly “knows” to be spinning up—oooh, spooky! mysterious!  Anyway, if that’s how you think it has to work, then you believe in local realism, and you must predict that Alice and Bob can win the CHSH game with probability at most 3/4.

What Bell observed in 1964 is that, even though quantum mechanics doesn’t let Alice send a signal to Bob (or vice versa) faster than the speed of light, it still makes a prediction about the CHSH game that conflicts with local realism.  (And thus, quantum mechanics exhibits what one might not have realized beforehand was even a logical possibility: it doesn’t allow communication faster than light, but simulating the predictions of quantum mechanics in a classical universe would require faster-than-light communication.)  In particular, if Alice and Bob share entangled qubits, say $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}},$$ then there’s a simple protocol that lets them violate the Bell inequality, winning the CHSH game ~85% of the time (with probability (1+1/√2)/2 > 3/4).  Starting in the 1970s, people did experiments that vindicated the prediction of quantum mechanics, and falsified local realism—or so the story goes.
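
(If you want to check the ~85% figure numerically, here's a short sketch using one textbook choice of measurement angles, Alice measuring at 0 or π/4 and Bob at ±π/8; the angles are my illustrative choice, not something taken from the experiments discussed below.)

```python
import numpy as np

# Shared entangled state (|00> + |11>)/sqrt(2).
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

def basis_vector(theta, outcome):
    """Measurement direction: outcome 0 -> |theta>, outcome 1 -> |theta + pi/2>."""
    angle = theta + outcome * np.pi / 2
    return np.array([np.cos(angle), np.sin(angle)])

# Standard CHSH angles: Alice uses 0 or pi/4, Bob uses pi/8 or -pi/8.
alice_angle = {0: 0.0, 1: np.pi / 4}
bob_angle = {0: np.pi / 8, 1: -np.pi / 8}

def win_probability(x, y):
    """Probability that a XOR b == x AND y when the inputs are (x, y)."""
    p = 0.0
    for a in (0, 1):
        for b in (0, 1):
            if a ^ b == x & y:
                amp = np.kron(basis_vector(alice_angle[x], a),
                              basis_vector(bob_angle[y], b)) @ phi_plus
                p += amp ** 2
    return p

avg = np.mean([win_probability(x, y) for x in (0, 1) for y in (0, 1)])
print(avg, (1 + 1 / np.sqrt(2)) / 2)   # both ~0.8536
```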

The violation of the Bell inequality has a schizophrenic status in physics.  To many of the physicists I know, Nature’s violating the Bell inequality is so trivial and obvious that it’s barely even worth doing the experiment: if people had just understood and believed Bohr and Heisenberg back in 1925, there would’ve been no need for this whole tiresome discussion.  To others, however, the Bell inequality violation remains so unacceptable that some way must be found around it—from casting doubt on the experiments that have been done, to overthrowing basic presuppositions of science (e.g., our own “freedom” to generate random bits x and y to send to Alice and Bob respectively).

For several decades, there was a relatively conservative way out for local realist diehards, and that was to point to “loopholes”: imperfections in the existing experiments which meant that local realism was still theoretically compatible with the results, at least if one was willing to assume a sufficiently strange conspiracy.

Fine, you interject, but surely no one literally believed these little experimental imperfections would be the thing that would rescue local realism?  Not so fast.  Right here, on this blog, I’ve had people point to the loopholes as a reason to accept local realism and reject the reality of quantum entanglement.  See, for example, the numerous comments by Teresa Mendes in my Whether Or Not God Plays Dice, I Do post.  Arguing with Mendes back in 2012, I predicted that the two main loopholes would both be closed in a single experiment—and not merely eventually, but in, like, a decade.  I was wrong: achieving this milestone took only a few years.

Before going further, let’s understand what the two main loopholes are (or rather, were).

The locality loophole arises because the measuring process takes time and Alice and Bob are not infinitely far apart.  Thus, suppose that, the instant Alice starts measuring her particle, a secret signal starts flying toward Bob’s particle at the speed of light, revealing her choice of measurement setting (i.e., the value of x).  Likewise, the instant Bob starts measuring his particle, his doing so sends a secret signal flying toward Alice’s particle, revealing the value of y.  By the time the measurements are finished, a few microseconds later, there’s been plenty of time for the two particles to coordinate their responses to the measurements, despite being “classical under the hood.”

Meanwhile, the detection loophole arises because in practice, measurements of entangled particles—especially of photons—don’t always succeed in finding the particles, let alone ascertaining their properties.  So one needs to select those runs of the experiment where Alice and Bob both find the particles, and discard all the “bad” runs where they don’t.  This by itself wouldn’t be a problem, if not for the fact that the very same measurement that reveals whether the particles are there, is also the one that “counts” (i.e., where Alice and Bob feed x and y and get out a and b)!

To someone with a conspiratorial mind, this opens up the possibility that the measurement’s success or failure is somehow correlated with its result, in a way that could violate the Bell inequality despite there being no real entanglement.  To illustrate, suppose that at the instant they’re created, one entangled particle says to the other: “listen, if Alice measures me in the x=0 basis, I’ll give the a=1 result.  If Bob measures you in the y=1 basis, you give the b=1 result.  In any other case, we’ll just evade detection and count this run as a loss.”  In such a case, Alice and Bob will win the game with certainty, whenever it gets played at all—but that’s only because of the particles’ freedom to choose which rounds will count.  Indeed, by randomly varying their “acceptable” x and y values from one round to the next, the particles can even make it look like x and y have no effect on the probability of a round’s succeeding.
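
(Here's a toy simulation, entirely hypothetical of course, of exactly that conspiracy: each round the "particles" secretly pick one acceptable input pair, fail to be detected on anything else, and thereby win 100% of the rounds that count, while the detection rate looks innocently independent of x and y.)

```python
import random

def conspiracy_round():
    """One round in which the 'particles' exploit the detection loophole.

    At creation, the particles secretly pick which input pair (x*, y*) they will
    respond to, and pre-agree on winning outputs for that pair.  On any other
    inputs they simply evade detection."""
    x_star, y_star = random.randint(0, 1), random.randint(0, 1)
    a = random.randint(0, 1)
    b = a ^ (x_star & y_star)          # guarantees a XOR b == x* AND y*

    x, y = random.randint(0, 1), random.randint(0, 1)   # the actual inputs
    detected = (x == x_star) and (y == y_star)
    won = (a ^ b) == (x & y)
    return detected, won

rounds = [conspiracy_round() for _ in range(100_000)]
wins_when_detected = [won for det, won in rounds if det]
print(len(wins_when_detected) / len(rounds))           # ~0.25: detection rate looks "innocent"
print(sum(wins_when_detected) / len(wins_when_detected))  # 1.0: perfect score on the rounds that count
```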

Until a month ago, the state-of-the-art was that there were experiments that closed the locality loophole, and other experiments that closed the detection loophole, but there was no single experiment that closed both of them.

To close the locality loophole, “all you need” is a fast enough measurement on photons that are far enough apart.  That way, even if the vast Einsteinian conspiracy is trying to send signals between Alice’s and Bob’s particles at the speed of light, to coordinate the answers classically, the whole experiment will be done before the signals can possibly have reached their destinations.  Admittedly, as Nicolas Gisin once pointed out to me, there’s a philosophical difficulty in defining what we mean by the experiment being “done.”  To some purists, a Bell experiment might only be “done” once the results (i.e., the values of a and b) are registered in human experimenters’ brains!  And given the slowness of human reaction times, this might imply that a real Bell experiment ought to be carried out with astronauts on faraway space stations, or with Alice on the moon and Bob on earth (which, OK, would be cool).  If we’re being reasonable, however, we can grant that the experiment is “done” once a and b are safely recorded in classical, macroscopic computer memories—in which case, given the speed of modern computer memories, separating Alice and Bob by half a kilometer can be enough.  And indeed, experiments starting in 1998 (see for example here) have done exactly that; the current record, unless I’m mistaken, is 18 kilometers.  (Update: I was mistaken; it’s 144 kilometers.)  Alas, since these experiments used hard-to-measure photons, they were still open to the detection loophole.
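
(For a sense of the timing constraints, here's a trivial back-of-the-envelope calculation of light-travel times; the ~1.3 km entry is just my stand-in for the "about a mile" separation of the Delft labs described below.)

```python
# Light-travel time over a few relevant separations: the measurements (and the
# recording of a and b) have to finish before a light-speed signal could cross.
c = 299_792_458.0   # speed of light, m/s

for label, meters in [("0.5 km (enough, given fast electronics)", 500),
                      ("~1.3 km (roughly the Delft separation)", 1300),
                      ("18 km", 18_000),
                      ("144 km", 144_000)]:
    print(f"{label}: {meters / c * 1e6:.1f} microseconds")
# ~1.3 km gives ~4.3 microseconds, consistent with the ~4 microsecond
# measurement window mentioned later in this post.
```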

To close the detection loophole, the simplest approach is to use entangled qubits that (unlike photons) are slow and heavy and can be measured with success probability approaching 1.  That’s exactly what various groups did starting in 2001 (see for example here), with trapped ions, superconducting qubits, and other systems.  Alas, given current technology, these sorts of qubits are virtually impossible to move miles apart from each other without decohering them.  So the experiments used qubits that were close together, leaving the locality loophole wide open.

So the problem boils down to: how do you create long-lasting, reliably-measurable entanglement between particles that are very far apart (e.g., in separate labs)?  There are three basic ideas in Hensen et al.’s solution to this problem.

The first idea is to use a hybrid system.  Ultimately, Hensen et al. create entanglement between electron spins in nitrogen vacancy centers in diamond (one of the hottest—or coolest?—experimental quantum information platforms today), in two labs that are about a mile away from each other.  To get these faraway electron spins to talk to each other, they make them communicate via photons.  If you stimulate an electron, it’ll sometimes emit a photon with which it’s entangled.  Very occasionally, the two electrons you care about will even emit photons at the same time.  In those cases, by routing those photons into optical fibers and then measuring the photons, it’s possible to entangle the electrons.

Wait, what?  How does measuring the photons entangle the electrons from whence they came?  This brings us to the second idea, entanglement swapping.  The latter is a famous procedure to create entanglement between two particles A and B that have never interacted, by “merely” entangling A with another particle A’, entangling B with another particle B’, and then performing an entangled measurement on A’ and B’ and conditioning on its result.  To illustrate, consider the state

$$ \frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}} \otimes \frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}} $$

and now imagine that we project the first and third qubits onto the state $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}}.$$

If the measurement succeeds, you can check that we’ll be left with the state $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}}$$ in the second and fourth qubits, even though those qubits were not entangled before.
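
(Here's a short numpy check of that claim, my own sketch rather than anything from the paper: project qubits 1 and 3 onto the Bell state, and qubits 2 and 4 come out in that same Bell state, with the projection succeeding a quarter of the time.)

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)            # (|00> + |11>)/sqrt(2)

# Four qubits in the state |Phi+>_{12} (x) |Phi+>_{34}, stored as a tensor
# with one index per qubit.
psi = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)     # indices (q1, q2, q3, q4)

# Project qubits 1 and 3 onto |Phi+> (unnormalized post-measurement state).
bell = phi_plus.reshape(2, 2)                             # indices (q1, q3)
post = np.einsum('ac,abcd->bd', bell, psi)                # remaining indices (q2, q4)

success_probability = np.sum(post ** 2)                   # probability of this outcome
post_normalized = post / np.linalg.norm(post)

print(success_probability)                                # 0.25
print(np.allclose(post_normalized, bell))                 # True: qubits 2 and 4 are now |Phi+>
```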

So to recap: these two electron spins, in labs a mile away from each other, both have some probability of producing a photon.  The photons, if produced, are routed to a third site, where if they’re both there, then an entangled measurement on both of them (and a conditioning on the results of that measurement) has some nonzero probability of causing the original electron spins to become entangled.

But there’s a problem: if you’ve been paying attention, all we’ve done is cause the electron spins to become entangled with some tiny, nonzero probability (something like 6.4×10⁻⁹ in the actual experiment).  So then, why is this any improvement over the previous experiments, which just directly measured faraway entangled photons, and also had some small but nonzero probability of detecting them?

This leads to the third idea.  The new setup is an improvement because, whenever the photon measurement succeeds, we know that the electron spins are there and that they’re entangled, without having to measure the electron spins to tell us that.  In other words, we’ve decoupled the measurement that tells us whether we succeeded in creating an entangled pair, from the measurement that uses the entangled pair to violate the Bell inequality.  And because of that decoupling, we can now just condition on the runs of the experiment where the entangled pair was there, without worrying that that will open up the detection loophole, biasing the results via some bizarre correlated conspiracy.  It’s as if the whole experiment were simply switched off, except for those rare lucky occasions when an entangled spin pair gets created (with its creation heralded by the photons).  On those rare occasions, Alice and Bob swing into action, measuring their respective spins within the brief window of time—about 4 microseconds—allowed by the locality loophole, seeking an additional morsel of evidence that entanglement is real.  (Well, actually, Alice and Bob swing into action regardless; they only find out later whether this was one of the runs that “counted.”)

So, those are the main ideas (as well as I understand them); then there’s lots of engineering.  In their setup, Hensen et al. were able to create just a few heralded entangled pairs per hour.  This allowed them to produce 245 CHSH games for Alice and Bob to play, and to reject the hypothesis of local realism at ~96% confidence.  Jungsang Kim explained to me that existing technologies could have produced many more events per hour, and hence, in a similar amount of time, “particle physics” (5σ or more) rather than “psychology” (2σ) levels of confidence that local realism is false.  But in this type of experiment, everything is a tradeoff.  Building not one but two labs for manipulating NV centers in diamond is extremely onerous, and Hensen et al. did what they had to do to get a significant result.
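
(To get a rough feel for those confidence levels, here's a deliberately naive binomial sketch. The win counts below are hypothetical numbers chosen for illustration, not the paper's, and the actual analysis in Hensen et al. is more careful, avoiding, for example, the assumption that the rounds are independent and identically distributed.)

```python
from math import comb

def binomial_tail(n, k, p):
    """P[Binomial(n, p) >= k]: chance of at least k wins in n games, if each
    game is won independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 245          # number of heralded CHSH games in the experiment
p_local = 0.75   # best win probability allowed by local realism

# Hypothetical win counts, just to see how the evidence scales with the score:
for wins in (190, 196, 200, 210):
    print(wins, binomial_tail(n, wins, p_local))
# Winning in the mid-190s out of 245 already pushes this naive p-value into the
# few-percent range (roughly "2 sigma"); many more rounds would be needed to
# reach particle-physics-style 5-sigma confidence.
```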

The basic idea here, of using photons to entangle longer-lasting qubits, is useful for more than pulverizing local realism.  In particular, the idea is a major part of current proposals for how to build a scalable ion-trap quantum computer.  Because of cross-talk, you can’t feasibly put more than 10 or so ions in the same trap while keeping all of them coherent and controllable.  So the current ideas for scaling up involve having lots of separate traps—but in that case, one will sometimes need to perform a Controlled-NOT, or some other 2-qubit gate, between a qubit in one trap and a qubit in another.  This can be achieved using the Gottesman-Chuang technique of gate teleportation, provided you have reliable entanglement between the traps.  But how do you create such entanglement?  Aha: the current idea is to entangle the ions by using photons as intermediaries, very similar in spirit to what Hensen et al. do.

At a more fundamental level, will this experiment finally convince everyone that local realism is dead, and that quantum mechanics might indeed be the operating system of reality?  Alas, I predict that those who confidently predicted that a loophole-free Bell test could never be done, will simply find some new way to wiggle out, without admitting the slightest problem for their previous view.  This prediction, you might say, is based on a different kind of realism.

Randomness Rules in Quantum Mechanics

Monday, June 16th, 2014

So, Part II of my two-part series for American Scientist magazine about how to recognize random numbers is now out.  This part—whose original title was the one above, but was changed to “Quantum Randomness” to fit the allotted space—is all about quantum mechanics and the Bell inequality, and their use in generating “Einstein-certified random numbers.”  I discuss the CHSH game, the Free Will Theorem, and Gerard ‘t Hooft’s “superdeterminism” (just a bit), before explaining the striking recent protocols of Colbeck, Pironio et al., Vazirani and Vidick, Coudron and Yuen, and Miller and Shi, all of which expand a short random seed into additional random bits that are “guaranteed to be random unless Nature resorted to faster-than-light communication to bias them.”  I hope you like it.

[Update: See here for Hacker News thread]

In totally unrelated news, President Obama’s commencement speech at UC Irvine, about climate change and the people who still deny its reality, is worth reading.

Collaborative Refutation

Monday, February 4th, 2013

At least eight people—journalists, colleagues, blog readers—have now asked my opinion of a recent paper by Ross Anderson and Robert Brady, entitled “Why quantum computing is hard and quantum cryptography is not provably secure.”  Where to begin?

  1. Based on a “soliton” model—which seems to be almost a local-hidden-variable model, though not quite—the paper advances the prediction that quantum computation will never be possible with more than 3 or 4 qubits.  (Where “3 or 4” are not just convenient small numbers, but actually arise from the geometry of spacetime.)  I wonder: before uploading their paper, did the authors check whether their prediction was, y’know, already falsified?  How do they reconcile their proposal with (for example) the 8-qubit entanglement observed by Häffner et al. with trapped ions—not to mention the famous experiments with superconducting Josephson junctions, buckyballs, and so forth that have demonstrated the reality of entanglement among many thousands of particles (albeit not yet in a “controllable” form)?
  2. The paper also predicts that, even with 3 qubits, general entanglement will only be possible if the qubits are not collinear; with 4 qubits, general entanglement will only be possible if the qubits are not coplanar.  Are the authors aware that, in ion-trap experiments (like those of David Wineland that recently won the Nobel Prize), the qubits generally are arranged in a line?  See for example this paper, whose abstract reads in part: “Here we experimentally demonstrate quantum error correction using three beryllium atomic-ion qubits confined to a linear, multi-zone trap.”
  3. Finally, the paper argues that, because entanglement might not be a real phenomenon, the security of quantum key distribution remains an open question.  Again: are the authors aware that the most practical QKD schemes, like BB84, never use entanglement at all?  And that therefore, even if the paper’s quasi-local-hidden-variable model were viable (which it’s not), it still wouldn’t justify the claim in the title that “…quantum cryptography is not provably secure”?

Yeah, this paper is pretty uninformed even by the usual standards of attempted quantum-mechanics-overthrowings.  Let me now offer three more general thoughts.

First thought: it’s ironic that I’m increasingly seeing eye-to-eye with Lubos Motl—who once called me “the most corrupt piece of moral trash”—in his rantings against the world’s “anti-quantum-mechanical crackpots.”  Let me put it this way: David Deutsch, Chris Fuchs, Sheldon Goldstein, and Roger Penrose hold views about quantum mechanics that are diametrically opposed to one another’s.  Yet each of these very different physicists has earned my admiration, because each, in his own way, is trying to listen to whatever quantum mechanics is saying about how the world works.  However, there are also people all of whose “thoughts” about quantum mechanics are motivated by the urge to plug their ears and shut out whatever quantum mechanics is saying—to show how whatever naïve ideas they had before learning QM might still be right, and how all the experiments of the last century that seem to indicate otherwise might still be wiggled around.  Like monarchists or segregationists, these people have been consistently on the losing side of history for generations—so it’s surprising, to someone like me, that they continue to show up totally unfazed and itching for battle, like the knight from Monty Python and the Holy Grail with his arms and legs hacked off.  (“Bell’s Theorem?  Just a flesh wound!”)

Like any physical theory, of course quantum mechanics might someday be superseded by an even deeper theory.  If and when that happens, it will rank alongside Newton’s apple, Einstein’s elevator, and the discovery of QM itself among the great turning points in the history of physics.  But it’s crucial to understand that that’s not what we’re discussing here.  Here we’re discussing the possibility that quantum mechanics is wrong, not for some deep reason, but for a trivial reason that was somehow overlooked since the 1920s—that there’s some simple classical model that would make everyone exclaim,  “oh!  well, I guess that whole framework of exponentially-large Hilbert space was completely superfluous, then.  why did anyone ever imagine it was needed?”  And the probability of that is comparable to the probability that the Moon is made of Gruyère.  If you’re a Bayesian with a sane prior, stuff like this shouldn’t even register.

Second thought: this paper illustrates, better than any other I’ve seen, how despite appearances, the “quantum computing will clearly be practical in a few years!” camp and the “quantum computing is clearly impossible!” camp aren’t actually opposed to each other.  Instead, they’re simply two sides of the same coin.  Anderson and Brady start from the “puzzling” fact that, despite what they call “the investment of tremendous funding resources worldwide” over the last decade, quantum computing still hasn’t progressed beyond a few qubits, and propose to overthrow quantum mechanics as a way to resolve the puzzle.  To me, this is like arguing in 1835 that, since Charles Babbage still hasn’t succeeded in building a scalable classical computer, we need to rewrite the laws of physics in order to explain why classical computing is impossible.  I.e., it’s a form of argument that only makes sense if you’ve adopted what one might call the “Hype Axiom”: the axiom that any technology that’s possible sometime in the future, must in fact be possible within the next few years.

Third thought: it’s worth noting that, if (for example) you found Michel Dyakonov’s arguments against QC (discussed on this blog a month ago) persuasive, then you shouldn’t find Anderson’s and Brady’s persuasive, and vice versa.  Dyakonov agrees that scalable QC will never work, but he ridicules the idea that we’d need to modify quantum mechanics itself to explain why.  Anderson and Brady, by contrast, are so eager to modify QM that they don’t mind contradicting a mountain of existing experiments.  Indeed, the question occurs to me of whether there’s any pair of quantum computing skeptics whose arguments for why QC can’t work are compatible with one another’s.  (Maybe Alicki and Dyakonov?)

But enough of this.  The truth is that, at this point in my life, I find it infinitely more interesting to watch my two-week-old daughter Lily, as she discovers the wonderful world of shapes, colors, sounds, and smells, than to watch Anderson and Brady, as they fail to discover the wonderful world of many-particle quantum mechanics.  So I’m issuing an appeal to the quantum computing and information community.  Please, in the comments section of this post, explain what you thought of the Anderson-Brady paper.  Don’t leave me alone to respond to this stuff; I don’t have the time or the energy.  If you get quantum probability, then stand up and be measured!

I was wrong about Joy Christian

Thursday, May 10th, 2012

Update: I decided to close comments on this post and the previous Joy Christian post, because they simply became too depressing for me.

I’ve further decided to impose a moratorium, on this blog, on all discussions about the validity of quantum mechanics in the microscopic realm, the reality of quantum entanglement, or the correctness of theorems such as Bell’s Theorem.  I might lift the moratorium at some future time.  For now, though, life simply feels too short to me, and the actually-interesting questions too numerous.  Imagine, for example, that there existed a devoted band of crackpots who believed, for complicated, impossible-to-pin-down reasons of topology and geometric algebra, that triangles actually have five corners.  These crackpots couldn’t be persuaded by rational argument—indeed, they didn’t even use words and sentences the same way you do, to convey definite meaning.  And crucially, they had infinite energy: you could argue with them for weeks, and they would happily argue back, until you finally threw up your hands in despair for all humanity, at which point the crackpots would gleefully declare, “haha, we won!  the silly ‘triangles have 3 corners’ establishment cabal has admitted defeat!”  And, in a sense, they would have won: with one or two exceptions, the vast majority who know full well how many corners a triangle has simply never showed up to the debate, thereby conceding to the 5-cornerists by default.

What would you do in such a situation?  What would you do?  If you figure it out, please let me know (but by email, not by blog comment).

In response to my post criticizing his “disproof” of Bell’s Theorem, Joy Christian taunted me that “all I knew was words.”  By this, he meant that my criticisms were entirely based on circumstantial evidence, for example that (1) Joy clearly didn’t understand what the word “theorem” even meant, (2) every other sentence he uttered contained howling misconceptions, (3) his papers were written in an obscure, “crackpot” way, and (4) several people had written very clear papers pointing out mathematical errors in his work, to which Joy had responded only with bluster.  But I hadn’t actually studied Joy’s “work” at a technical level.  Well, yesterday I finally did, and I confess that I was astonished by what I found.  Before, I’d actually given Joy some tiny benefit of the doubt—possibly misled by the length and semi-respectful tone of the papers refuting his claims.  I had assumed that Joy’s errors, though ultimately trivial (how could they not be, when he’s claiming to contradict such a well-understood fact provable with a few lines of arithmetic?), would nevertheless be artfully concealed, and would require some expertise in geometric algebra to spot.  I’d also assumed that of course Joy would have some well-defined hidden-variable model that reproduced the quantum-mechanical predictions for the Bell/CHSH experiment (how could he not?), and that the “only” problem would be that, due to cleverly-hidden mistakes, his model would be subtly nonlocal.

What I actually found was a thousand times worse: closer to the stuff freshmen scrawl on an exam when they have no clue what they’re talking about but are hoping for a few pity points.  It’s so bad that I don’t understand how even Joy’s fellow crackpots haven’t laughed this off the stage.  Look, Joy has a hidden variable λ, which is either 1 or -1 uniformly at random.  He also has a measurement choice a of Alice, and a measurement choice b of Bob.  He then defines Alice and Bob’s measurement outcomes A and B via the following functions:

A(a,λ) = something complicated = (as Joy correctly observes) λ

B(b,λ) = something complicated = (as Joy correctly observes) -λ

I shit you not.  A(a,λ) = λ, and B(b,λ) = -λ.  Neither A nor B has any dependence on the choices of measurement a and b, and the complicated definitions that he gives for them turn out to be completely superfluous.  No matter what measurements are made, A and B are always perfectly anticorrelated with each other.
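
(For anyone who wants it spelled out, here's a two-minute Python check, mine and not Joy's, that a model with A(a,λ)=λ and B(b,λ)=-λ yields the correlation E(a,b)=-1 for every pair of settings, and hence sits exactly at the classical CHSH bound of 2 rather than violating it.)

```python
import random

def A(a, lam):    # Joy's Alice outcome: depends only on the hidden variable lambda
    return lam

def B(b, lam):    # Joy's Bob outcome: always the exact opposite, regardless of b
    return -lam

def E(a, b, samples=100_000):
    """Empirical correlation <A*B> for measurement settings (a, b)."""
    total = 0
    for _ in range(samples):
        lam = random.choice([-1, 1])
        total += A(a, lam) * B(b, lam)
    return total / samples

# Any four settings whatsoever -- the functions above ignore them anyway.
S = E('a', 'b') + E('a', "b'") + E("a'", 'b') - E("a'", "b'")
print(S)   # exactly -2.0 every time: |S| <= 2, i.e., no CHSH violation at all
```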

You might wonder: what could lead anyone—no matter how deluded—even to think such a thing could violate the Bell/CHSH inequalities?  Aha, Joy says you only ask such a naïve question because, lacking his deep topological insight, you make the rookie mistake of looking at the actual outcomes that his model actually predicts for the actual measurements that are actually made.  What you should do, instead, is compute a “correlation function” E(a,b) that’s defined by dividing A(a,λ)B(b,λ) by a “normalizing factor” that’s a product of the quaternions a and b, with a divided on the left and b divided on the right.  Joy seems to have obtained this “normalizing factor” via the technique of pulling it out of his rear end.  Now, as Gill shows, Joy actually makes an algebra mistake while computing his nonsensical “correlation function.”  The answer should be -a·b - a×b, not -a·b.  But that’s truthfully beside the point.  It’s as if someone announced his revolutionary discovery that P=NP implies N=1, and then critics soberly replied that, no, the equation P=NP can also be solved by P=0.

So, after 400+ comments on my previous thread—including heady speculations about M-theory, the topology of spacetime, the Copenhagen interpretation, continuity versus discreteness, etc., as well as numerous comparisons to Einstein—this is what it boils down to.  A(a,λ) = λ and B(b,λ) = -λ.

I call on FQXi, in the strongest possible terms, to stop lending its legitimacy to this now completely-unmasked charlatan.  If it fails to do so, then I will resign from FQXi, and will encourage fellow FQXi members to do the same.

While I don’t know the exact nature of Joy’s relationship to Oxford University or to the Perimeter Institute, I also call on those institutions to sever any connections they still have with him.

Finally, with this post I’m going to try a new experiment.  I will allow comments through the moderation filter if, and only if, they exceed a minimum threshold of sanity and comprehensibility, and do not randomly throw around terms like “M-theory” with no apparent understanding of what they mean.  Comments below the sanity threshold can continue to appear freely in the previous Joy Christian thread (which already has a record-setting number of comments…).

Update (May 11): A commenter pointed me to a beautiful preprint by James Owen Weatherall, which tries sympathetically to make as much sense as possible out of Joy Christian’s ideas, and then carefully explains why the attempt fails (long story short: because of Bell’s theorem!).  Notice the contrast between the precision and clarity of Weatherall’s prose—the way he defines and justifies each concept before using it—and the obscurity of Christian’s prose.

Another Update: Over on the previous Joy Christian thread, some commenters are now using an extremely amusing term for people who believe that theories in physics ought to say something comprehensible about the predicted outcomes of physics experiments.  The term: “computer nerd.”

Third Update: Quite a few commenters seem to assume that I inappropriately used my blog to “pick a fight” with poor defenseless Joy Christian, who was minding his own business disproving and re-disproving Bell’s Theorem.  So let me reiterate that I wasn’t looking for this confrontation, and in fact took great pains to avoid it for six years, even as Joy became more and more vocal.  It was Joy, not me, who finally forced matters to a head through his absurd demand that I pay him $100,000 “with interest,” and then his subsequent attacks.

Bell’s-inequality-denialist Joy Christian offers me $200K if scalable quantum computers are built

Wednesday, May 2nd, 2012

Joy Christian is the author of numerous papers claiming to disprove Bell’s theorem.  Yes, that Bell’s theorem: the famous result from the 1960s showing that no local hidden variable theory can reproduce all predictions of quantum mechanics for entangled states of two particles.  Here a “local hidden variable theory” means—and has always meant—a theory where Alice gets some classical information x, Bob gets some other classical information y (generally correlated with x), then Alice and Bob choose which respective experiments to perform, and finally Alice sees a measurement outcome that’s a function only of her choice and of x (not of Bob’s choice or his measurement outcome), and Bob sees a measurement outcome that’s a function only of his choice and of y.  In modern terms, Bell, with simplifications by Clauser et al., gave an example of a game that Alice and Bob can win at most 75% of the time under any local hidden variable theory (that’s the Bell inequality), but can win 85% of the time by measuring their respective halves of an entangled state (that’s the Bell inequality violation).  The proofs are quite easy, both for the inequality and for its violation by quantum mechanics.  Check out this problem set for the undergrad course I’m currently teaching if you’d like to be led through the proof yourself (it’s problem 7).

In case you’re wondering: no, Bell’s Theorem has no more been “disproved” than the Cauchy-Schwarz Inequality, and it will never be, even if papers claiming otherwise are stacked to the moon.  Like Gödel’s and Cantor’s Theorems, Bell’s Theorem has long been a lightning rod for incomprehension and even anger; I saw another “disproof” at a conference in 2003, and will doubtless see more in the future.  The disproofs invariably rely on personal reinterpretations of the perfectly-clear concept of “local hidden variables,” to smuggle in what would normally be called non-local variables.  That smuggling is accompanied by mathematical sleight-of-hand (the more, the better) to disguise the ultimately trivial error.

While I’d say the above—loudly, even—to anyone who asked, I also declined several requests to write a blog post about Joy Christian and his mistakes.  His papers had already been refuted ad nauseam by others (incidentally, I find myself in complete agreement with Luboš Motl on this one!), and I saw no need to pile on the poor dude.  Having met him, at the Perimeter Institute and at several conferences, I found something poignant and even touching about Joy’s joyless quest.  I mean, picture a guy who made up his mind at some point that, let’s say, √2 is actually a rational number, all the mathematicians having been grievously wrong for millennia—and then unironically held to that belief his entire life, heroically withstanding the batterings of reason.  Show him why 2=A²/B² has no solution in positive integers A,B, and he’ll answer that you haven’t understood the very concept of rational number as deeply as him.  Ask him what he means by “rational number,” and you’ll quickly enter the territory of the Monty Python dead parrot sketch.  So why not just leave this dead parrot where it lies?

Anyway, that’s what I was perfectly content to do, until Monday, when Joy left the following comment on my “Whether or not God plays dice, I do” post:

You owe me 100,000 US Dollars plus five years of interest. In 2007, right under your nose (when you and I were both visiting Perimeter Institute), I demonstrated, convincing to me, that scalable quantum computing is impossible in the physical world.

He included a link to his book, in case I wanted to review his arguments against the reality of entanglement.  I have to confess I had no idea that, besides disproving Bell’s theorem, Joy had also proved the impossibility of scalable quantum computing.  Based on his previous work, I would have expected him to say that, sure, quantum computers could quickly factor 10,000-digit numbers, but nothing about that would go beyond ordinary, classical, polynomial-time Turing machines—because Turing himself got the very definition of Turing machines wrong, by neglecting topological octonion bivectors or something.

Be that as it may, Joy then explained that the purpose of his comment was to show that

there is absolutely nothing that would convince you to part with your 100,000. You know that, and everyone else knows that … The whole thing is just a smug scam to look smarter than the rest of us without having to do the hard work. Good luck with that.

In response, I clarified what it would take to win my bet:

As I’ve said over and over, what would be necessary and sufficient would be to convince the majority of the physics community. Do you hope and expect to do that? If so, then you can expect my $100,000; if not, then not. If a scientific revolution has taken place only inside the revolutionary’s head, then let the monetary rewards be likewise confined to his head.

Joy replied:

[L]et us forget about my work. It is not for you. Instead, let me make a counter offer to you. I will give you 200,000 US dollars the day someone produces an actual, working, quantum computer in a laboratory recognizable by me. If I am still alive, I will send you 200,000 US Dollars, multiplied by an appropriate inflation factor. Go build a quantum computer.

I’m grateful to Joy for his exceedingly generous offer.  But let’s forget about money for now.  Over the past few months, I’ve had a real insight: the most exciting potential application of scalable quantum computers is neither breaking RSA, nor simulating quantum physics, nor Grover’s algorithm, nor adiabatic optimization.  Instead, it’s watching the people who said it was impossible try to explain themselves.  That prospect, alone, would more than justify a Manhattan-project-scale investment in this field.

Postscript. If you want something about quantum foundations and hidden-variable theories of a bit more scientific interest, check out this MathOverflow question I asked on Monday, which was answered within one day by George Lowther (I then carefully wrote up the solution he sketched).

Updates (May 6). Depending on what sort of entertainment you enjoy, you might want to check out the comments section, where you can witness Joy Christian becoming increasingly unhinged in his personal attacks on me and others (“our very own FQXi genius” – “biased and closed-minded” – “incompetent” – “Scott’s reaction is a textbook case for the sociologists” – “As for Richard Gill, he is evidently an incompetent mathematician” – “I question your own intellectual abilities” – “your entire world view is based on an experimentally unsupported (albeit lucrative) belief and nothing else” – “You have been caught with your pants down and still refusing to see what is below your belly” – “let me point out that you are the lesser brain among the two of us. The pitiful flatness of your brain would be all too painful for everyone to see when my proposed experiment is finally done” – etc., etc).  To which I respond: the flatness of my brain?  Also notable is Joy’s Tourette’s-like repetition of the sentence, “I will accept judgement from no man but Nature.”  Nature is a man?

I just posted a comment explaining the Bell/CHSH inequality in the simplest terms I know, which I’ll repost here for convenience:

Look everyone, consider the following game. Two players, Alice and Bob, can agree on a strategy in advance, but from that point forward, are out of communication with each other (and don’t share quantum entanglement or anything like that). After they’re separated, Alice receives a uniformly-random bit A, and Bob receives another uniformly-random bit B (uncorrelated with A). Their joint goal is for Alice to output a bit X, and Bob to output a bit Y, such that

X + Y = AB (mod 2)

or equivalently,

X XOR Y = A AND B.

They want to succeed with the largest possible probability. It’s clear that one strategy they can follow is always to output X=Y=0, in which case they’ll win 75% of the time (namely, in three of the four cases: all except A=B=1).

Furthermore, by enumerating all of Alice and Bob’s possible pure strategies and then appealing to convexity, one can check that there’s no strategy that lets them win more than 75% of the time.  In other words, no matter what they do, they lose for one of the four possible (A,B) pairs.

Do you agree with the previous paragraph? If so, then you accept the Bell/CHSH inequality, end of story.

Of all the papers pointing out the errors in Joy Christian’s attempted refutations of the simple arithmetic above, my favorite is Richard Gill’s.  Let me quote from Gill’s eloquent conclusion:

There remains a psychological question, why so strong a need is felt by so many researchers to “disprove Bell” in one way or another? At a rough guess, at least one new proposal comes up per year. Many pass by unnoticed, but from time to time one of them attracts some interest and even media attention. Having studied a number of these proposals in depth, I see two main strategies of would-be Bell-deniers.

The first strategy (the strategy, I would guess, in the case in question) is to build elaborate mathematical models of such complexity and exotic nature that the author him or herself is probably the only person who ever worked through all the details. Somewhere in the midst of the complexity a simple mistake is made, usually resulting from suppression of an important index or variable. There is a hidden and non-local hidden variable.

The second strategy is to simply build elaborate versions of detection loophole models. Sometimes the same proposal can be interpreted in both ways at the same time, since of course either the mistake or the interpretation as a detection loophole model are both interpretations of the reader, not of the writer.

According to the Anna Karenina principle of evolutionary biology, in order for things to succeed, everything has to go exactly right, while for failure, it suffices if any one of a myriad factors is wrong. Since errors are typically accidental and not recognized, an apparently logical deduction which leads to a manifestly incorrect conclusion does not need to allow a unique diagnosis. If every apparently logical step had been taken with explicit citation of the mathematical rule which was being used, and in a specified context, one could say where the first misstep was taken. But mathematics is almost never written like that, and for good reasons. The writer and the reader, coming from the same scientific community, share a host of “hidden assumptions” which can safely be taken for granted, as long as no self-contradiction occurs. Saying that the error actually occurred in such-and-such an equation at such-and-such a substitution depends on various assumptions.

The author who still believes in his result will therefore claim that the diagnosis is wrong because the wrong context has been assumed.

We can be grateful for Christian that he has had the generosity to write his one page paper with a more or less complete derivation of his key result in a more or less completely explicit context, without distraction from the author’s intended physical interpretation of the mathematics. The mathematics should stand on its own, the interpretation is “free”.  My finding is that in this case, the mathematics does not stand on its own.

Update (5/7): I can’t think of any better illustration than the comment thread below for my maxim that computation is clarity.  In other words, if you can’t explain how to simulate your theory on a computer, chances are excellent that the reason is that your theory makes no sense!  The following comment of mine expands on this point:

The central concept that I find missing from the comments of David Brown, James Putnam, and Thomas Ray is that of the sanity check.

Math and computation are simply the tools of clear thought. For example, if someone tells me that a 4-by-4 array of zorks contains 25 zorks in total, and I respond that 4 times 4 is 16, not 25, I’m not going to be impressed if the person then starts waxing poetic about how much more profound the physics of zorks is than my narrow and restricted notions of “arithmetic”. There must be a way to explain the discrepancy even at a purely arithmetical level. If there isn’t, then the zork theory has failed a basic sanity check, and there’s absolutely no reason to study its details further.

Likewise, the fact that Joy can’t explain how to code a computer simulation of (say) his exploding toy ball experiment that would reproduce his predicted Bell/CHSH violation is extremely revealing. This is also a sanity check, and it’s one that Joy flunks. Granted, if he were able to explain his model clearly enough for well-intentioned people to understand how to program it on a computer, then almost certainly there would be no need to actually run the program! We could probably just calculate what the program did using pencil and paper. Nevertheless, Bram, John Sidles, and others were entirely right to harp on this simulation question, because its real role is as a sanity check. If Joy’s ideas are not meaningless nonsense, then there’s no reason at all why we shouldn’t be able to simulate his experiment on a computer and get exactly the outcome that he predicts. Until Joy passes this minimal sanity check—which he hasn’t—there’s simply no need to engage in deep ruminations like the ones above about physics or philosophy or Joy’s “Theorema Egregious.”