## Randomness Rules in Quantum Mechanics

So, Part II of my two-part series for *American Scientist* magazine about how to recognize random numbers is now out. This part—whose original title was the one above, but was changed to “Quantum Randomness” to fit the allotted space—is all about quantum mechanics and the Bell inequality, and their use in generating “Einstein-certified random numbers.” I discuss the CHSH game, the Free Will Theorem, and Gerard ‘t Hooft’s “superdeterminism” (just a bit), before explaining the striking recent protocols of Colbeck, Pironio et al., Vazirani and Vidick, Coudron and Yuen, and Miller and Shi, all of which expand a short random seed into additional random bits that are “guaranteed to be random unless Nature resorted to faster-than-light communication to bias them.” I hope you like it.

[Update: See here for Hacker News thread]

In totally unrelated news, President Obama’s commencement speech at UC Irvine, about climate change and the people who still deny its reality, is worth reading.

Comment #1 June 16th, 2014 at 3:20 pm

Climate has been changing for billions of years, back and forth, back and forth. Now you see the ice age, now you don’t. All the proxy measures we have for it indicate that. How can anyone deny that reality? Or is “climate change” being disingenuously used in this case as some sort of non-literal code-phrase for preaching catastrophic man-made over-heating, and if so why?

Comment #2 June 16th, 2014 at 4:53 pm

I’d love to read you and esr duke it out on AGW.

Comment #3 June 16th, 2014 at 5:18 pm

Vitruvius, “climate change” is a pretty standard term for the large climate changes within the past century—and the larger ones projected for the next century—that have occurred, are occurring, and will occur as the direct result of human activity. “Anthropogenic climate change” is more precise. The fact that the climate was extremely different in, e.g., the ice ages or the Jurassic is unfortunately not a great comfort for human civilization, which developed during a ~10,000-year window during which the climate was much more stable, which window is now ending as the result of human activity.

Comment #4 June 16th, 2014 at 5:23 pm

Ok, Scott, let’s say I agree to go along with the “climate change” phrase, at least for the purpose of conversation, even though I feel it’s skating on thin ice that’s too close to the open waters inhabited by dangerous bunco artists. After all, I’m trying to be pragmatic, not dogmatic, so we’ll let it go for now.

With that proviso in place, then: if memory serves, when we left off considering this matter in comments 135, 136, 137, 138, 139, 140, 151, 160, 167, 169, 170, and 171 of the Eugene Goostman discussion, we had largely come to some sort of agreement that we would like to see the price of things better reflect their “true” cost, especially in environmental terms, we were considering how we might go about having that become the case, and you were proposing some sort of system of scientist-kings, rather than the traditional philosopher-kings, military-kings, or political-kings, with some sort of public participation, feedback, or oversight component, to solve the problem. And again, while I’m generally in favour of rule by scientific meritocracy, many aren’t, and even those who are often don’t agree on which particular scientists should be regally coronated, never mind all the problems with nouveau scientific results, as reported by the Bulletin of the Institute of Mathematical Statistics, Nature, et al., along the lines I elucidated here at Shtetl-Optimized in our last discussion of matters D-Wave.

Anyway, on that front I would like to highly recommend the “Why Doesn’t the Public Trust Scientists?” lecture by Baroness Onora O’Neill of Bengarve, Principal of Newnham College at the University of Cambridge, which was published by University of California Television in 2005. I think she comes very close to setting up the best available working model of how we can, in practice and not just in principle, set up our systems to manage the sorts of problems we are considering, by using some combination of meritocratic scientific professionals, regulatory constraint, and citizen oversight, while still accounting for matters like not being able to replace trust with trustworthiness, post-hoc accountability, and quis custodiet ipsos custodes? It’s a really excellent lecture overall, I think, and it does provide some guidance in these matters.

So, now, assuming we have indeed come to some sort of general agreement along these lines, and considering O’Neill’s provisos, what do we do to effect the results we have agreed? What do you recommend we try next?

Comment #5 June 16th, 2014 at 5:34 pm

Vitruvius #4:

what do we do to effect the results we have agreed? What do you recommend we try next?

Do you mean, what do we do to make the free, open scientific meritocracy that I described into a political reality, steering human civilization toward rationality? Hey, if you figure that one out, let me know. I’m assuming it can’t hurt to advocate for it in the comments section of my blog, but maybe there’s something else that would be even more effective.

Comment #6 June 16th, 2014 at 5:53 pm

It may not be much, then, Scott, given that we differ quite a lot in our skepticism about catastrophic predictions of the future, and in our personally optimistic v. pessimistic natures, yet for us to both be largely advocating for the same sort of measures in practice, if not always in principle, can’t be all bad. Perhaps all that is left for us now is to wish our species zʼál zyyan myt mʼazl; oh, and I hereby offer to buy the next round ;-)

Comment #7 June 16th, 2014 at 5:58 pm

I have never been all that bothered by Bell’s theorem and entanglement.

The correlation between Alice’s and Bob’s detectors is a quantum observable in and of itself. This observable is not collapsed into a definite state until the observations are brought together, by, say, Charlie (basically Alice and Bob are Schrödinger’s cats from Charlie’s perspective). At that point Charlie knows both the records of the angles of the detectors and the detected events. Thus from Charlie’s perspective the whole system is in a superposition of all the possible choices Alice and Bob could make, until the results are revealed.

So while it may seem that Alice and Bob are not in causal contact during the experiment, it is certainly true that at both the start and the end of the experiment all parts of the experiment must be in causal contact. At first to prepare the initial states, and at the end to resolve the final states.

Then again, maybe I am hopelessly naive about these matters.

Comment #8 June 16th, 2014 at 6:06 pm

Good grief I just realized the logical conclusion of what I just wrote:

That human free will is a chemical means to amplify quantum indeterminacy. Furthermore, all human decisions are little more than a sequence of coin flips, albeit a highly correlated sequence.

Comment #9 June 16th, 2014 at 6:27 pm

Vitruvius #6: Well then, I hereby offer to drink the next round! If you’re ever in Boston, email me and we’ll meet for drinks. And once we’re sufficiently buzzed, we can plot together how to bring the rest of the world over to enlightened scientific meritocracy. If you and I agreed with each other, then convincing the remaining 7 billion people couldn’t possibly be that hard…

Comment #10 June 16th, 2014 at 6:47 pm

Aaron #7: Yes, your understanding of Bell’s Theorem is decent. For decades, many physicists did dismiss it as basically a curiosity that shouldn’t bother you at all if you understand quantum mechanics, while others held it to be philosophically important (if still practically useless). What’s new in the last decade, and only emerged from quantum information, is that Bell inequality violation actually has practical applications (!!)—in particular, to guaranteed randomness generation, and to blind and authenticated quantum computing. So, that probably makes it harder to dismiss as “just a mathematical curiosity.”

Comment #11 June 16th, 2014 at 7:23 pm

One man’s superposition is another man’s eigen-vector.

Comment #12 June 16th, 2014 at 9:15 pm

And one man’s eigen-vector is another’s eigenvector.

Comment #13 June 16th, 2014 at 10:13 pm

What I find interesting about the “practical” uses of entanglement is that their value is directly tied to proof of lack of entanglement. You can make a perfectly good quantum random number generator without entanglement or Bell violations using, say, a single photon source and a beam splitter. Many people have done this, and they are even available as commercial products. However, this only works if you know for sure the size of your Hilbert space. Entanglement gives you a way to prove that there are no auxiliary degrees of freedom entangled to your random variable. It is the same with blind computation and quantum cryptography—the no-cloning theorem tells you that you can’t make a copy of the state, but measuring entanglement tells you that there wasn’t a copy of the state from the beginning.
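Evan’s point about needing to know your Hilbert space (and, below, Scott’s point about miscalibration) can be illustrated with a toy model—purely hypothetical, not any real product: treat a single-photon beamsplitter RNG as a Bernoulli source whose bias tracks the splitter’s calibration. Nothing in the output stream itself tells you the device is trustworthy:

```python
import random

def beamsplitter_bits(n, transmittance=0.5, seed=0):
    """Toy model of a single-photon beamsplitter RNG: each photon is
    transmitted (bit 1) with probability 'transmittance', else reflected
    (bit 0). The seed stands in for physics the user can't observe."""
    rng = random.Random(seed)
    return [1 if rng.random() < transmittance else 0 for _ in range(n)]

# A perfectly calibrated 50:50 splitter gives unbiased bits...
fair = beamsplitter_bits(100_000, transmittance=0.5)
# ...while a miscalibrated one silently produces biased output.
skewed = beamsplitter_bits(100_000, transmittance=0.6)

print(sum(fair) / len(fair))      # close to 0.5
print(sum(skewed) / len(skewed))  # close to 0.6
```

No test on the device’s own output distinguishes “quantum” bits from this simulation, which is why the protocols in the article certify randomness from Bell violations rather than from the bits themselves.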

Comment #14 June 16th, 2014 at 11:31 pm

Huh. That result of Coudron and Yuen is extremely surprising—both the laundering trick, and the fact that you can get arbitrarily many random bits out. Is there some rough intuition for why this works?

Also, a question lacking in precision: do the results in question in any way suggest that, in some sense, if I have four isolated quantum computers that have faulty hardware (in some suitable sense), and I control the communication between them, then I can simulate ZPP and BPP with minimal overhead and a small number of random bits? Is there some way to make this precise?

(Also, there’s a typo in the column: “firrst” should be “first.”)

Comment #15 June 17th, 2014 at 2:50 am

Doesn’t this critique apply to any particular hardware implementation of the Einstein-certified random number generator algorithm as well?

Also, is it correct to think of these protocols more as randomness amplifiers than generators?

Comment #16 June 17th, 2014 at 2:58 am

Evan #13: What you say is partly right but partly wrong. With commercial quantum RNGs, the issue is not (just) the size of Hilbert space, or the possibility of entanglement with other degrees of freedom. Rather, it’s the possibility that the RNG was just completely miscalibrated—or even was set up by an adversary to spit out 0s and 1s in a deterministic pseudorandom sequence.
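A minimal sketch of that adversarial scenario (everything here is hypothetical): a “quantum” RNG that secretly replays a seeded pseudorandom stream is completely reproducible by whoever planted the seed, yet sails through a naive bias check.

```python
import random

def adversarial_qrng(n, seed=1234):
    """Looks like a hardware RNG from the outside, but is a fully
    deterministic pseudorandom stream, known in advance to whoever
    planted the seed."""
    prng = random.Random(seed)
    return [prng.randint(0, 1) for _ in range(n)]

bits = adversarial_qrng(100_000)
ones = sum(bits) / len(bits)

print(abs(ones - 0.5) < 0.02)             # True: passes a naive bias test
print(adversarial_qrng(100_000) == bits)  # True: the adversary can replay every bit
```

This is exactly the failure mode that no amount of statistical testing of the output can rule out, and that the Bell-certified protocols are designed to exclude.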

Also, in the randomness-expansion protocols discussed in the article, as in blind/authenticated quantum computing, the only way to pass the verifier’s checks (with high probability) is using an entangled state. It’s true that verifying the presence of entanglement between the two provers has the side effect of verifying the absence of entanglement between the qubits you measured and anything else (because of monogamy of entanglement), and that can be important for cryptography.

Quantum cryptography is different, since with the most popular protocols, you can do it successfully without any entanglement at all (just unentangled single photons). In that case, entanglement only enters as a formal tool in the correctness proof, and as something that you’re trying to rule out.

Comment #17 June 17th, 2014 at 3:12 am

Joshua: Regarding the Coudron-Yuen protocol, the intro to their paper is quite well-written. Please let me know after reading it if there’s something you want clarified further.

Personally, the idea that you could do unlimited randomness expansion from a fixed random seed, and with a constant number of devices, was surprising and unintuitive to me too. However, after the idea was explained to me of simply “feeding the randomness back in”—recursively stretching n bits to 2^n, and 2^n bits to 2^{2^n}, etc., all the while taking care to make sure that the provers can’t detect that you’re using the randomness they themselves spit out to generate your questions—at some point my intuition flipped, to the point where I would’ve been more surprised if that turned out not to be possible in some way.

Regarding your second question, you should check out the seminal paper by Reichardt, Unger, and Vazirani! In some sense, RUV (building on prior work of Broadbent-Fitzsimons-Kashefi) does something even better than what you ask for. Namely, it lets a classical skeptic use the Bell inequality to force two untrustworthy quantum computers to carry out any computation in BQP for it, and be certain of the result (while meanwhile, the two QCs learn nothing about which quantum computation they performed, besides an upper bound on its length)!

This is better than what you wanted both in giving BQP rather than ZPP/BPP, and in requiring only 2 provers rather than 4. On the other hand, the QCs have to be merely “untrustworthy,” rather than “faulty” in an arbitrary way. In other words, once the QCs realize they can’t get away with any crap, that you’ll catch them if they do, then the QCs have to be able to work correctly!

Comment #18 June 17th, 2014 at 9:32 am

Rahul #15:

Doesn’t this critique apply to any particular hardware implementation of the Einstein-certified random number generator algorithm as well?

No, absolutely not—that’s the entire point! With an Einstein-certified generator, if the hardware is faulty, the worst that can happen is that you’ll reject, and declare “this randomness failed the test.” Provided there’s no faster-than-light communication, and your small initial random seed was good, you’ll never accept numbers as random even though they’re not (or rather, you’ll do so only with astronomically-small probability). It’s not called “Einstein-certified” for nothing!

Also, is it correct to think of these protocols more as randomness amplifiers than generators?

I’d say it just comes down to semantics. Here’s an analogy: should we call an atomic bomb an “explosion amplifier” rather than an “explosion generator,” because you need a conventional explosive to trigger the nuclear chain reaction?

I’m guessing most people would say no, because the conventional explosive only plays the role of triggering a completely different and vastly more powerful process. Well, it’s the same thing with Einstein-certified randomness, if not more so. It’s true that you need a few initial random bits to “trigger” the randomness chain reaction, but once the reaction gets started, it can continue forever all on its own—and there’s no sense in which the vast number of new random bits you’re generating were “implicit” in the initial seed.

Comment #19 June 17th, 2014 at 10:40 am

Hi Scott, great article!

How is the non-locality requirement of Bohmian interpretation:

“Bohmian mechanics is extremely strange, because it involves instantaneous communication between faraway particles”

any different from entanglement in non-Bohmian mechanics:

“If you measure an electron on Earth and find that it’s spinning left, then somehow, the counterpart electron in the Andromeda galaxy “knows” to be spinning left as well”.

Is it the case that Bohmian QM requires *all* particles to be communicating in such a manner, including particles on opposite sides of the universe (i.e., outside each other’s light cones), as opposed to only entangled particles in the case of non-Bohmian QM?

Comment #20 June 17th, 2014 at 10:57 am

fred #19: No, Bohmian mechanics only requires “nonlocal influences” between faraway particles if they’re entangled, not if they’re unentangled.

However, it’s crucial to understand that there’s no distance limit to entanglement! With or without Bohmian mechanics, you can certainly have entangled particles on opposite sides of the universe.

So then the only difference between the Bohmian and non-Bohmian cases is in how you choose to describe the situation. In ordinary QM, you’d say that the “reality” is fully encoded by the density matrix, and that nothing Alice does to her particle can change the density matrix of Bob’s particle (or vice versa)—so in that sense, there’s no faster-than-light communication. Of course, once you condition on the result of Alice’s measurement, that does change Bob’s density matrix, but the same thing happens even with classical correlation. So the only difference is quantitative: Bell’s Theorem tells you that you can produce “larger” correlations (in some sense) using entanglement than you can produce without it, enabling you (for example) to win the CHSH game 85% of the time.

In Bohmian mechanics, by contrast, you insist on ascribing “actual” states to the particles, in addition to their density matrices. And, because you’ve done that, you find that in order to stay consistent with Bell’s Theorem, the “actual” state of one particle has to send a faster-than-light message to the “actual” state of the other particle. You still can’t exploit this effect to send faster-than-light signals in practice, basically because you don’t know the “actual” states of the particles before you measure! (And this must be the case, since Bohmian mechanics is experimentally indistinguishable from standard QM, and we already said that standard QM doesn’t allow faster-than-light signalling.) The difference is conceptual: in Bohmian mechanics, but not in ordinary QM, you’d say that Alice’s choice of which operation to apply can, all by itself, nonlocally change “what’s really there” on Bob’s side of the universe (though, again, not in a way that Alice can actually exploit to signal to Bob).

Comment #21 June 17th, 2014 at 1:20 pm

Although Bohmian mechanics is experimentally indistinguishable from standard QM, could it be possible to argue against it via a complexity-theoretic argument? That is, could one concoct gnarly situations between Alice and Bob (and perhaps Charlie and Diane and…), where, in order to avoid violating the No-Superluminal-Signalling Rule, the universe has to — in some sense — perform incredible amounts of (computational) work to figure out how to collapse Bob’s state *just* the right way?

Comment #22 June 17th, 2014 at 1:30 pm

Henry #21: Good question! Yes, one can argue against Bohmian mechanics on complexity-theoretic grounds, if one wishes to. See, in particular, my Quantum Computing and Hidden Variables paper, where I show that sampling the entire hidden-variable trajectories, in theories like Bohmian mechanics, would generically give you at least the power of SZK, which is neither known nor believed to be in BQP. In that sense, one could say that simulating hidden-variable trajectories is “even harder” than simulating measurement outcomes in ordinary QM—so that maybe hidden variables should be disfavored, if our goal is to minimize the total computational resources posited by our description of Nature.

There are also various arguments one could make against that position, but I’ll leave those as exercises for the reader.

Comment #23 June 17th, 2014 at 2:04 pm

Scott #22,

Is there an experiment we could imagine to demonstrate that nature has SZK rather than BQP power or vice versa? In other words, does your demonstration show that Bohmian mechanics is another theory rather than another interpretation?

Comment #24 June 17th, 2014 at 2:15 pm

Jay #23: Unfortunately, no. The problem is that, in Bohmian mechanics, you don’t get to see the entire trajectory of the hidden variable: you only see where it is right now. (Were that not the case, Bohmian mechanics would indeed generate wildly-different empirical predictions than standard QM, and would probably be immediately ruled out on that ground.)

At most, you could hypothetically do an experiment—involving quantum interference between different states of your own brain!—that had the following property: if Bohmian mechanics were true, then your entire sequence of experiences while undergoing the experiment, were they laid out at once, would experimentally demonstrate Bohmian mechanics’ truth.

The problem, again, is that according to the rules of QM (upheld by Bohmian mechanics), you can’t reach back in time to see your whole previous history, in the way that would be needed here. In other words, even if, by the time the interference experiment on your brain is over, your “life history” now encodes the truth or falsehood of Bohmian mechanics, the experiment has necessarily caused you to forget those aspects of your life history that are relevant to the question! (Nor can those aspects be recorded anywhere else.)

Comment #25 June 17th, 2014 at 2:24 pm

Quantum “Randomness” is a somewhat misleading name.

Everyday randomness is the result of statistical observations about processes involving huge complexity and sensitivity to initial conditions, and would still hold in a perfectly “billiard ball” deterministic universe.

But Quantum Randomness is really about things happening without a cause… which doesn’t make a lick of sense no matter how you slice it, no?

(the multiverse interpretation is getting rid of that difficulty by saying that if two events can happen, they both happen, so there’s no causality paradox?)

Could the nature of QM randomness be similar to coming up with something like Chaitin’s number? I.e. not truly random (there is only one true Chaitin’s constant), but rather uncomputable?

Comment #26 June 17th, 2014 at 2:35 pm

fred #25:

Could the nature of QM randomness be similar to coming up with something like Chaitin’s number? I.e. not truly random (there is only one true Chaitin’s constant), but rather uncomputable?

No, because that hypothesis would predict wrong results for the CHSH game (assuming we’re unwilling to assume faster-than-light communication)! Reread the article.
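For readers who want to check the 85% figure from the article for themselves: in the standard optimal strategy, Alice and Bob measure a maximally entangled pair at angles chosen from {0, π/4} and {π/8, −π/8} according to their question bits, and the textbook rule that such a pair’s outcomes agree with probability cos²(θ_A − θ_B) gives a win probability of cos²(π/8) on every question pair. A sketch of the arithmetic:

```python
import math

# Standard optimal CHSH measurement angles (radians), indexed by question bit:
alice_angles = [0.0, math.pi / 4]
bob_angles = [math.pi / 8, -math.pi / 8]

def win_probability():
    """Average, over the four equally likely question pairs (x, y), of the
    probability that a XOR b == x AND y, using the rule that a maximally
    entangled pair measured at angles tA, tB agrees with probability
    cos^2(tA - tB)."""
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            p_equal = math.cos(alice_angles[x] - bob_angles[y]) ** 2
            # Equal answers win, except when x = y = 1 (then unequal answers win).
            total += p_equal if x & y == 0 else 1 - p_equal
    return total / 4

print(win_probability())  # ~0.8536, i.e. cos^2(pi/8)
```

Each of the four angle differences works out to ±π/8 (or its complement), which is why every question pair is won with the same probability.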

Comment #27 June 17th, 2014 at 4:21 pm

Scott,

Your guest column raised a question for me. What is the relationship (if any) between the conjecture of Maldacena and Susskind that any entanglement is related (via AdS/CFT) to some sort of ER bridge, and Wolfram’s entangled/connected cellular automata?

Comment #28 June 17th, 2014 at 4:53 pm

Mike #27: Wow, that’s a wild question! No relation, as far as I can see. Maldacena and Susskind’s ER=EPR idea assumes quantum mechanics from the outset, then tries to find a way to uphold QM even in the presence of exotic spacetime geometries, which mess with the definition of “locality.” That then leads them to a picture of “wormholes” between the Hawking radiation and the black-hole interior, which however could only be traversed by a very bizarre experiment (one that, by Harlow and Hayden’s argument, might require an exponential-time quantum computation for realistic black holes).

Wolfram, by contrast, is motivated by the desire to reject QM, and to replace it with a fundamentally-classical picture. In his case, the long-range threads arise, not for any subtle reason involving black holes, but simply because without them he couldn’t even explain the ordinary Bell/CHSH experiment. (As I say in the column, I don’t think Wolfram can explain Bell/CHSH even with the long-range threads! But that requires a further argument, involving special relativity and reference frames.)

Comment #29 June 17th, 2014 at 4:58 pm

Is there a formal definition for where those “seed” bits come from? It appears that in the Coudron-Yuen protocol each “step” of generation creates results that are sorta “independent” from each other, in the sense that with the first you couldn’t predict any other values, but that would seem to imply that any input seed works, because otherwise they wouldn’t be independent. Is that true?

Comment #30 June 17th, 2014 at 5:02 pm

Dani #29: I didn’t completely understand your question, but in the Coudron-Yuen protocol, the initial input seed is just assumed to be a uniformly random string (and the seeds for the later steps are generated in a way that’s completely specified by their protocol). If the input seed is not uniformly random, but is “dirty” (i.e., it has some entropy but also some predictability), the results of Miller-Shi and Chung-Shi-Wu let you do Einstein-certified randomness expansion even then.

On the other hand, it’s not hard to show that it’s impossible to get Einstein-certified randomness if you have zero randomness to start with (or indeed, zero randomness that isn’t known to the provers). This implies, in particular, that for every fixed value of your initial seed, if the provers know that value then they can cheat, and avoid generating randomness.

Comment #31 June 17th, 2014 at 5:40 pm

Scott #28

But I thought that a classical computer can simulate any QM system (efficiently or not is another matter).

And all classical computers are by definition deterministic.

And a classical computer can be implemented as a cellular automaton (I thought cellular automata are Turing-complete).

So what’s missing here is a pure source of randomness required to simulate the collapse of the amplitude and give a complete picture of reality?

And Wolfram thought that he could simulate that with those long distance threads while staying deterministic?

But:

1) what about augmenting cellular automata with pure sources of randomness? (we lose determinism) Is that enough to recreate QM? (without a need for long distance threads and whatnot)

2) what if we’re only interested in the multiverse interpretation?

Comment #32 June 17th, 2014 at 5:47 pm

Scott #30: I see, thanks. So essentially the Einstein-certified randomness allows you to “correct” for some non-random bias in the input, but you still need at least a little randomness to start out with. That intuitively makes sense too: I was worried about that case where the input is exactly known (no randomness), because in that case these results didn’t really make sense.

Basically the idea is that if you don’t know everything about the input data you’re giving in, the “randomness generator” can exploit that by “expanding” those things you don’t know until you are given good random data back that you yourself couldn’t predict.

In that case, is it possible to create such a system that, when seen as a black box, can take a request for “random n bits,” and output n perfectly random bits? In other words, perfectly random in the sense that no other users of the machine, given previous results, could predict your n random bits? It appears that the point of your article was that Bell’s Inequality guarantees that such a thing is possible because of the lack of determinism in nature (at least from the prover’s perspective), but I haven’t seen any protocols that do just that, especially in a way that the caller can prove it’s “perfectly random.”

Comment #33 June 17th, 2014 at 6:13 pm

Most importantly, how would you “set up” such a system in a way that causes it to also be “perfectly random” to the (potentially untrustworthy) organizers? Or at least, is there a way to check whether or not such a system meets the “perfectly random” requirement with high probability?

Comment #34 June 17th, 2014 at 6:50 pm

Dani #32, #33: For some reasonable interpretation of what you’re asking for, that’s exactly what the Bell-certified randomness expansion protocols (Vazirani-Vidick, Coudron-Yuen, Shi et al., etc.) give you, and the entire point of my article was to explain how they give it. Could you please explain how what you want differs from what those protocols give?

Comment #35 June 17th, 2014 at 6:55 pm

fred #31:

So what’s missing here is a pure source of randomness required to simulate the collapse of the amplitude and give a complete picture of reality?

No, that’s not the issue at all: the issue is locality.

Of course you can simulate quantum mechanics deterministically, in the sense that BQP⊆EXP. But Bell’s Theorem assures you that your simulation will be wildly nonlocal. That is, you can’t run the simulation by having a deterministic computer at Alice’s end representing Alice’s state, and a different deterministic computer at Bob’s end representing Bob’s state, and no communication between the two computers. If you want the simulation to be local, then it has to involve fundamental indeterminism: that’s Bell’s Theorem.

Comment #36 June 18th, 2014 at 12:59 am

Has a working, physical device based on the Einstein certified random numbers principle been constructed? Are there any hurdles in that aspect?

Do any of the so-called quantum random number generators commercially sold incorporate these algorithms?

Comment #37 June 18th, 2014 at 6:22 am

Scott, accurately describing “foundational” thought experiments usually requires only Hilbert spaces of modest dimension. Why do you think there is an inevitable obstacle to approximating the Bell setup on “traditional” cellular automata? You only need to include Alice, Bob, and possibly a few other auxiliary variables as well (cf. the rather standard ideas mentioned in #7).

Comment #38 June 18th, 2014 at 8:09 am

Rahul #36: Yes, as I said in the article, Pironio et al. already did a proof-of-concept demonstration for Einstein-certified randomness. And I understand that NIST is currently working on implementing something with a more reasonable bit-rate (which is the main hurdle). Unlike with (say) quantum computing, I don’t think there’s any daunting technical obstacle in the way of making this practical, or even commercial. On the other hand, it’s unclear to me how many real-world customers are paranoid enough to want this!

No, the quantum random number generators that are currently sold commercially don’t incorporate any of these ideas. At best, there’s an indirect connection: once you’ve used the CHSH game to establish the conceptual point that quantum measurement outcomes really, truly have to be random (based on the causal structure of spacetime), that might increase your certainty that measuring a single qubit in the {0,1} basis (like the current commercial QRNGs do) also produces true randomness. But of course, it doesn’t address any worries about calibration problems with the QRNG, or the possibility that a malicious person tampered with it.

Comment #38 June 18th, 2014 at 8:11 am

Alexander #37: I don’t understand what you’re trying to say. The problem with “traditional” cellular automata reproducing Bell inequality violation comes about because those CAs are classical and local. It has nothing to do with the dimensions of Hilbert spaces or anything else.

Comment #40 June 18th, 2014 at 8:16 am

Scott #39: (1) Is classical grid-computer modeling of some quantum process impossible? Is that the principal difference between such a system and cellular automata? (2) How can locality be violated if the correlation can be found only in the final measurement by Charlie (this is the idea of Deutsch, Hayden, Tipler, Everett, et al.)?

Comment #41 June 18th, 2014 at 9:28 am

Alexander #40:

(1) As I said in comment #35, obviously you can “model” a quantum process on a classical computer, but Bell’s Theorem tells you that the simulation will have to be spatially nonlocal (in the sense that it can’t have separate, non-communicating components for Alice’s part and for Bob’s part).

(2) (Classical) locality can be violated because, when Charlie does look at Alice and Bob’s results from playing the CHSH game, he can see after the fact that they won the game 85% of the time, whereas in a local classical universe, they could only possibly have won at most 75% of the time. For details, reread the section of my article about the CHSH game.

Comment #42 June 18th, 2014 at 10:11 am

Scott #41, the Bell theorem may be applied only if all the conditions of the theorem are satisfied. I was talking about the setup suggested in the Bell experiment, and the possibility of reproducing its results with a classical cellular automaton. The Bell theorem assumes that after each measurement Alice and Bob have a definite result—an assumption usually encoded in the word “realistic” in formulations of the theorem. If your classical computer honestly simulates Everett’s idea, I can hardly imagine how you could catch it out in such a simple experiment.
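The “realistic” assumption under discussion—that each measurement has a definite result—can also be brute-forced: a local deterministic strategy for the CHSH game is just a pair of answer tables, one for Alice and one for Bob, and enumerating all 16 such pairs confirms the classical bound of 75% that the article cites (shared classical randomness is a convex mixture of these strategies, so it can’t do better).

```python
from itertools import product

def chsh_score(alice, bob):
    """Fraction of the four question pairs (x, y) on which a deterministic
    strategy wins, i.e. on which a XOR b == x AND y, where a = alice[x]
    and b = bob[y] are the predetermined answers."""
    wins = sum((alice[x] ^ bob[y]) == (x & y)
               for x, y in product((0, 1), repeat=2))
    return wins / 4

# Each local deterministic strategy is a table of answers to questions 0 and 1.
best = max(chsh_score(a, b)
           for a in product((0, 1), repeat=2)
           for b in product((0, 1), repeat=2))
print(best)  # 0.75 — no local deterministic strategy beats 3/4
```

Compare this 75% ceiling with the 85% that the entangled strategy achieves; that gap is the entire content of the CHSH inequality violation.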

Comment #43 June 18th, 2014 at 10:23 am

It is interesting because the increase of entropy in a general quantum system is given by a measurement (interaction) on a quantum entangled state, as demonstrated by Bell’s theorem; but is the entropy of the system (with the emitted random photons and the measured entangled particles) invariant? So can measuring the quantum noise of the black box give information about the transmitted random string?

Comment #44 June 18th, 2014 at 10:40 am

Alexander #42: Yes, that’s true, it is an assumption of Bell’s Theorem that experiments have results. If you want to drop that assumption, forget about simulating experimental results, and instead simulate the evolution of the entire Everettian wavefunction, then you’ll still have a problem doing that with a classical, spatially-local cellular automaton—but in this case, for even more basic reasons (i.e., where in your local, classical CA are you going to store the 2^n complex amplitudes? and whatever the answer, how could you possibly call the result “spatially local”?)

domenico #43: I don’t quite understand your question, but in the usual picture, making a measurement of a quantum state is a process that increases entropy (unless the state happens to be an eigenstate of the measurement, which in the case of the CHSH game, it’s not).

Comment #45 June 18th, 2014 at 10:53 am

Scott,

You mentioned that you were at first apprehensive of the result that randomness can be expanded. But I would like to take that statement and turn it on its head.

Let’s assume that it is impossible to expand randomness. That would have some pretty phenomenal implications with regard to the second law of thermodynamics: namely, that the amount of randomness in a system can never increase (sorry for the vagueness).

Now I realize that a strong hot cup of tea is not exactly an entangled-state generator; nevertheless, thanks to theorems like the Poincaré recurrence theorem, we know that there is no fundamental way to generate randomness classically (although the volume of phase space covered by orbits allows for effective randomness).

Thus one can infer that in some way the second law of thermodynamics is inherently quantum in origin; notwithstanding the confusion between unitary (invertible, length- and angle-preserving) and isometric (length- and angle-preserving) operators in infinite-dimensional spaces.

Comment #46 June 18th, 2014 at 12:43 pm

Scott #44, I absolutely agree with the problems you mention (in round brackets), but as I wrote in the very first comment, for a foundational experiment n is small enough (5-6 or so).
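For concreteness, here is a rough back-of-envelope sketch (my own, not from the thread) of the memory needed to store all 2^n amplitudes, assuming 16 bytes per amplitude (a double-precision complex number): trivial for n around 5 or 6, hopeless for large n.

```python
# Back-of-envelope memory cost of storing the full state vector of an
# n-qubit system, assuming 16 bytes per complex amplitude (a pair of
# 64-bit floats). The cost doubles with every added qubit.
def statevector_bytes(n_qubits):
    return 2 ** n_qubits * 16

print(statevector_bytes(6))    # 1024 bytes: trivial for a toy simulation
print(statevector_bytes(30))   # ~17 GB: a large workstation
print(statevector_bytes(50))   # ~18 petabytes: hopeless classically
```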

Comment #47 June 18th, 2014 at 2:25 pm

Scott I know you are rational. Would superdeterminism convert you to a theist? If it were true would you believe an absolute system of astrology exists? Thank you for your patience with the question.

Comment #48 June 18th, 2014 at 2:29 pm

J #47:

Scott I know you are rational.

Yes.

Would superdeterminism convert you to a theist? If it were true would you believe an absolute system of astrology exists?

If superdeterminism were true, then yes, I’d be extremely open to theism, astrology, and more. Why not? After all, not even astrologers or religious fundamentalists have postulated cosmic conspiracies quite as all-encompassing as the superdeterminists have.

Comment #49 June 18th, 2014 at 4:25 pm

Scott as a young boy sits on a sofa with his mother in an old-fashioned, cluttered doctor’s office. The doctor stands near the sofa, holding a cigarette and listening.

MOTHER (To the doctor)

“He’s been depressed. All of a sudden, he can’t do anything.”

DOCTOR (Nodding)

“Why are you depressed, Scott?”

MOTHER (Nudging Scott)

“Tell Dr. Flicker.” (Young Scott sits, his head down. His mother answers for him)

“It’s something he read.”

DOCTOR (Puffing on his cigarette and nodding)

“Something he read, huh?”

SCOTT (His head still down)

“The universe is expanding.”

DOCTOR

“The universe is expanding?”

SCOTT (Looking up at the doctor)

“Well, the universe is everything, and if it’s expanding, someday it will break apart and that would be the end of everything!”

Disgusted, his mother looks at him.

MOTHER (shouting)

“What is that your business?”

(she turns back to the doctor)

“He stopped doing his homework!”

SCOTT

“What’s the point?”

MOTHER (Excited, gesturing with her hands)

“What has the universe got to do with it? You’re here in Brooklyn! Brooklyn is not expanding!”

DOCTOR (Heartily, looking down at Scott)

“It won’t be expanding for billions of years yet, Scott.

And we’ve gotta try to enjoy ourselves while we’re here. Uh?” (laughs)

SCOTT (quiet, shakes his head)

Comment #50 June 18th, 2014 at 6:01 pm

Aaron Sheldon #7 (yeah, a bit late)

The causal-contact loophole has been removed by ingenious experiments in which the experimental apparatus is randomly switched while the photons are in flight from the emitters and spacelike-separated from the detectors.

On Obama’s speech, a weak point is where he asks whether you would rather listen to 97 scientists or just 3 – er, well, I’d like to know about their backgrounds before deciding; science isn’t done by popularity vote. I’m sure about 97/100 doctors were suggesting blood-letting during the great plague too. And any fool knows Obama wouldn’t be able to give such a speech if shale gas reserves hadn’t been discovered. If they spent as much on theoretical and experimental physics research as on the Kennedy-inspired space program (or just the Iraq war), we’d almost certainly have quantum computers (if they are allowed by nature), and have advanced particle physics to a point where real-world engineering benefits in energy sources and materials would enable mankind to cope with adverse climate change, random or otherwise.

I mean, assuming quantum computing is scalable, once we get an almost exact model of microscopic particles, we could probably discover how to do economically beneficial fusion with QC simulations.

Comment #51 June 18th, 2014 at 9:42 pm

James #50: The argument “who cares if 97/100 doctors recommend a particular treatment? the same number probably would’ve recommended bloodletting during a great plague” is analogous to the following:

I’m trying to sell you a few acres of land, far from any city center. My asking price is a billion dollars. You object, “That’s a completely ridiculous price! No nearby piece of land is selling for anything like that! No one else thinks it’s worth as much!”

I confidently reply, “Yeah, well, that’s what everyone said about Manhattan Island in the early 1600s. But everyone was wrong. The land in Manhattan really was worth an astronomical amount.”

The fallacy should be clear: when making your case, you don’t get to “shift the market price by an act of imagination”—asserting that, since it’s possible that the consensus will change in the future, you therefore get to act as if the consensus were whatever you’d like it to be right now. At best, you can invest the hard work to try to change the consensus, by convincing everyone else of the value of your property, the efficacy of your cure, or the unreality of human-caused climate change.

Comment #52 June 18th, 2014 at 10:41 pm

If you are in a state of sin for considering arithmetical methods of generating random digits then are you actively evil if you use QM to exponentially expand randomness? Have they opened a demonic portal?

John von Neumann save us.

Comment #53 June 18th, 2014 at 11:43 pm

Scott#48:

Thank you. I DO NOT have a strong mind like yours. I try to be rational and am highly educated in the US, but since I am from the third world I have my insecurities. My father saw my astrology when I was a child; this has locked my mind for 20 years now, and his words have influenced me. He is predicting scary things now. What would you advise me? Can I break free? Or am I doomed to the prophecies?

Comment #54 June 18th, 2014 at 11:48 pm

more like 30 years.

Comment #55 June 19th, 2014 at 2:30 am

Alice and Bob stay in a black box, and a random string is obtained from the black box, while Charlie observes the transparent black box (a sphere of photomultipliers); if Charlie’s measurements permit increasing the correlation between Alice and Bob by an infinitesimal quantity, then it is possible to correct all the quantum noise (if there is a Bell’s-theorem correction for an infinitesimal, then…). If this is true, then it is impossible to obtain a true random generator in the physical world, because there are always physical laws that describe a system (it could be like Gödel’s incompleteness theorem).

Comment #56 June 19th, 2014 at 2:35 pm

Once you’ve got randomness expansion & laundering, it seems to me that the obvious extension is to seed it with only one bit.

In the same way, sigma-delta ADCs are easier to understand with multi-bit DACs, but they’re most powerful when you take the generalized theorem and extrapolate it down to the 1-bit case.

Comment #57 June 20th, 2014 at 3:40 am

Dear Scott, I am a long-time reader of your blog, but this is the first time that I am leaving a comment.

I came across this article today: http://rjlipton.wordpress.com/2013/04/27/sex-lies-and-quantum-computers/. I am wondering if you have read it; and if you have, what you think about it. I understand that you are a busy man and might not have time to write a response to it, but I am sure that many of your readers (including me) would be very grateful if you do.

I became interested in quantum computing because of you; but, alas, my understanding of it still remains at the level of a layman’s. Therefore, I do not have the expertise needed to evaluate Mr Lipton’s essay. Since I consider you to be a trustworthy expert, it would be wonderful if you would take the time to share your thoughts about it — even if just briefly.

Keep up the fantastic writing — your blog never fails to edify and entertain me.

Warmest regards,

M.

Comment #58 June 20th, 2014 at 11:05 am

M. #57: Lipton is totally mistaken that, in the quantum black-box model, the quantum machine gets to see the individual wires and gates (and that the comparison to classical is therefore unfair). The only assumption made is that you get to query the black box in superposition, and the comparison is completely fair. I see that a commenter on his blog already pointed this out.

Apart from that mistake, everything he says looks correct—and not only that, but I’ve been making many of the same points over and over on this blog! (Look at the tagline below the masthead.) But, as is often the case, the way he says it annoys me greatly. He writes:

Thus from a mathematical view point it is clear that there is no proof that quantum is better than classical. None. Zero.

As Aram Harrow pointed out in the comments, this is true but almost comically beside the point. One could equally well say: “From a mathematical viewpoint there is no proof that NEXP doesn’t have log-depth circuits. None. Zero. So maybe all cryptographers have been wasting their time for the past 50 years.”

To his credit, Lipton is at least consistent: he actually believes that maybe P=NP=PSPACE, which helps to put his belief that maybe P=BQP in context. But it’s important for other people to understand that this is like a physicist telling everyone:

“Hey, maybe the conservation of energy will be discovered to be violated tomorrow! Maybe relativity and quantum mechanics will be overturned! Maybe 20-legged cows will descend on us from space! After all, there’s no mathematical proof that any of these things won’t happen! See how open-minded I am?”

Again and again, Lipton seems to me to conflate the fact that no one can disprove something with there being any positive reason to take the thing seriously. The hardness of simulating quantum mechanics on a classical computer hasn’t been proved—anyone who knows anything knows that—but it’s a statement that’s withstood decades of attempts to falsify it (indeed, from long before the statement was precisely formulated), to the extent that it would be accepted as a “law” in any empirical science.

Comment #59 June 22nd, 2014 at 5:42 pm

Hi Scott

Thanks for the article, I really enjoyed it (although I naively had half expected / hoped that you were going to give a *practical* method for producing random numbers).

Two quick questions:

1) If you need seed random numbers for the generator to work, what is the benefit of the generator? If you have some other method for producing true random numbers, then why not just use that? (I guess it could be slow, but so is your method for now; who says the expander will be quicker than whatever method you were using to generate the seed bits?) And if you do not have another method, then how will the expander get its seed bits to work at all?

2) You seem pretty intent on ensuring that the method is foolproof against any mechanical defects, or even ‘nature cheating’. If so, how do you propose ensuring that Alice and Bob are entirely unable to cheat? The “referee” sends both Alice and Bob the “cards” in order to play the game. As such, surely both Alice and Bob are within causal contact of the referee, and there could potentially be some sort of system error providing Alice and Bob with additional information allowing them to cheat. Even if the system is not flawed, perhaps nature is cheating by ensuring that the information needed to cheat is being provided?

Comment #60 June 22nd, 2014 at 9:02 pm

Yoni #59:

0) I did discuss a practical method for random number generation! As I said in the article, the Einstein-certified randomness protocols (a la Colbeck-Renner and Vazirani-Vidick) have already been demonstrated on a small scale and could be made practical with existing technology, if anyone wanted it enough.

1) The benefit is enormous expansion in the number of random bits you have. Pseudorandom number generators, which are used in all of our computers, operate on the same principle: you need to “seed” them with, for example, the current system time or some “random” keystrokes, but once you do, they’re able to generate a vastly larger number of bits that are indistinguishable from random bits by most ordinary programs. With Einstein-certified randomness, the main difference is that the huge number of outputs are actually random, rather than just pseudorandom. But the need for a seed isn’t new.

2) No, the whole point of the protocol is that if Alice and Bob win the game, and if the initial seed was fine, and if there was no faster-than-light communication, then the output bits must be truly random. You prove a theorem to that effect. To do so, in particular, you have to rule out any possible cheating strategies of Alice and Bob.

Comment #61 June 24th, 2014 at 4:58 am

Scott

0) I think it comes down to different expectations for the word “practical”. When I see it, I think “here is a practical experiment you can do with a laptop, two potatoes, and a stainless steel fork”. When you say it, you mean “this isn’t only theoretical, but can actually be implemented given two supercomputers, a dozen scientists, and a million dollars of lab equipment”. I know your meaning is more accurate, but that didn’t stop me from expecting something I could “try at home”.

1) My point was: if the seed number is not “Einstein-certified” random, then presumably neither is the output. Is this correct? If so, then what is the point of going to so much effort to ensure that the system can generate such strongly certified randomness? By contrast, in the case of pseudorandom number generators, the point is not to generate strongly random numbers but to generate numbers that can be *treated* as random for a specific purpose, and to do it very fast. In fact, I use pseudorandom number generators for this purpose all the time, and actually usually fix the seed, since randomness itself is not particularly useful for what I am trying to do (Monte Carlo simulations). But you wouldn’t use that method for something where randomness was *really* important.
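For example, the kind of fixed-seed pseudorandom expansion I mean looks like this (a minimal Python sketch using the standard library's generator; the seed value is arbitrary):

```python
import random

# A pseudorandom generator expands a short seed into a long bit stream,
# but deterministically: the same seed always reproduces the same
# stream. That reproducibility is why fixed seeds are convenient for
# Monte Carlo work, and it is exactly what true randomness rules out.
def expand(seed, n_bits):
    rng = random.Random(seed)
    return [rng.getrandbits(1) for _ in range(n_bits)]

print(expand(12345, 16))   # 16 "random-looking" bits, fixed by the seed
```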

2) I still don’t understand how that works. If there was enough time for the information about the cards drawn to have reached both Alice and Bob, then why is faster-than-light communication needed to cheat? As an example, if you and I played the game, but we both “knew” the referee and got him to stick little post-it notes on each of our cards stating the colour of the other’s card, then we could easily win the game without faster-than-light communication, and without quantum physics at all, no matter how random the initial seed was.

Comment #62 June 24th, 2014 at 8:20 pm

Yoni #61:

1) A very relevant point here is that, by the Miller-Shi and Chung-Shi-Wu results, the initial seed doesn’t need to be perfectly random—it’s enough for it to have some entropy. And while generating “pure” randomness might be hard (at least without additional assumptions, like multiple independent sources), there are many ways to generate a sequence of bits with some entropy: for example, just make one up by typing randomly on your keyboard.

2) Then goddammit, just force Alice and Bob to return their answers before there was time for a signal to get from one to the other (only time for a signal to get from them to the referee, who’s in between them). This has been done in Bell experiments since Aspect’s in the 1980s.

Comment #63 June 25th, 2014 at 4:02 pm

Scott #58

“Again and again, Lipton seems to me to conflate the fact that no one can disprove something, with there being any positive reason to take the thing seriously.”

But, similarly, one could say that a thing seeming unlikely cannot be conflated with a proof of impossibility.

You’ve come up with tons of explanations (electrical fences), analogies (colored frogs), historical precedents to back up your gut feeling that P!=NP … but no amount of this will ever serve as a mathematical proof.

That’s all he’s saying: there is *no* mathematical proof (I thought you guys were in the business of proving).

You may find that comical and beside the point, but one could say that what’s really comical is the incapability of the complexity-theory community to come up with a proof for something that’s supposedly so obvious.

I know your argument is “well, it’s hard to prove there isn’t a way to do something”, but your very argument about an electrical fence seems to suggest that there’s something very hard and palpable there; we just don’t have the tools to describe it?

Comment #64 June 25th, 2014 at 5:42 pm

fred #63: I don’t think you’ve engaged yet with my main point. Namely, the people who point out “but there’s no proof!” tend to be extremely selective in the things they point out there’s no proof of, and the selectivity is what belies their claim of open-mindedness. So for example, it’s true that there’s no proof (yet) that P≠NP, but there’s also no proof that alien abductions aren’t occurring. So, do you also think alien abductions are a serious possibility? If not, then can you give a reason for taking P=NP seriously that doesn’t also apply to alien abductions? My claim is that, if you studied complexity theory enough (and avoided catching whatever weird disease Lipton caught), P=NP would come to seem to you as extremely similar to alien abductions: something you can’t formally rule out, but that there’s no good positive reason to take seriously.

Comment #65 June 26th, 2014 at 10:33 am

Scott #64

Hehe, you’re making it very difficult by comparing a mathematical statement with a very vague and ambiguous statement about some circumstantial event!

I could prove that there is no alien abduction happening by fitting every single human with a Google Glass and having farms of computers review the footage in real time for any hint of abduction.

I would hope that there’s something deeper about the difficulty of proving P!=NP than about the difficulty of proving that no alien abductions are going on.

So, what’s interesting to me is, where do we go from here?

1) assume P!=NP and move on? I.e. there are enough interesting things built on top of P!=NP that can be proved or have interesting consequences? That seems to be the case (complexity zoo, boson sampling, etc)

2) how useful is it to try and come up with a proof for P!=NP? Or trying to understand why we fail to prove it. I.e. are we missing some special tools and techniques? Or look at it from new perspectives (and we need some sort of breakthrough like Godel theorem).

I’m not a complexity theorist, so maybe there is no hard distinction for researchers between 1) and 2).

(I would think that the current trend of physics to tie things to complexity theory could put some pressure to examine 2) more closely).

Anyway, P!=NP is often touted as the fundamental question of computer science, and I think that many people who come to this to satisfy some curiosity are quite surprised by the kinds of discussions among the experts (the same applies to the feasibility of QC). It’s way more “meta” than one expects given the mathematical nature of the question.

Comment #66 June 30th, 2014 at 7:55 am

I have a question about the randomness-magnifying scheme, and I don’t know enough QM or QC in order to read the original papers. I’ll be glad if you can clarify a detail of the algorithms.

You said that if the original sequence of bits only has limited entropy, the process still works. By how much does each iteration enlarge the original entropy? That is, for what function f does the process take an n-bit string distributed with H bits of entropy and return a string of 2^n bits with f(H) bits of entropy? And a similar question for the laundering process.

Comment #67 July 17th, 2014 at 2:14 pm

Could the Wiener measure on subsets of all continuous paths between two points explain the “randomness” in QM? I mean, path continuity and isotropy enable the assignment of finite measure to subsets, which can be calculated as probabilities.