The real issue is more that determinism is not even observable, because of the need to extrapolate to infinity.

You said that if the original sequence of bits only has limited entropy, the process still works. By how much does each iteration enlarge the original entropy? That is, for what function *f* does the process take an *n*-bit string distributed with *H* bits of entropy and return a string of *2^n* bits with *f(H)* bits of entropy? And a similar question for the laundering process.

Hehe, you’re making it very difficult by comparing a mathematical statement with a very vague and ambiguous statement about some circumstantial event!

I could prove that there is no alien abduction happening by fitting every single human with a Google Glass, and having farms of computers review the footage in real time for any hint of abduction 😛

I would hope that there’s something deeper about the difficulty of proving P!=NP than about the difficulty of proving that there are no alien abductions going on.

So, what’s interesting to me is, where do we go from here?

1) Assume P!=NP and move on? I.e., are there enough interesting things built on top of P!=NP that can be proved or have interesting consequences? That seems to be the case (the Complexity Zoo, boson sampling, etc.)

2) How useful is it to try to come up with a proof of P!=NP, or to try to understand why we fail to prove it? I.e., are we missing some special tools and techniques? Or should we look at it from new perspectives (which might require some sort of breakthrough, like Gödel’s theorem)?

I’m not a complexity theorist, so maybe there is no hard distinction for researchers between 1) and 2).

(I would think that the current trend in physics of tying things to complexity theory could put some pressure on examining 2) more closely.)

Anyway, P!=NP is often touted as the fundamental question of Computer Science, and I think that many people who come to this to satisfy some curiosity are quite surprised by the types of discussions among the experts (the same applies to the feasibility of QC). It’s way more “meta” than one expects given the mathematical nature of the question.

“Again and again, Lipton seems to me to conflate the fact that no one can disprove something, with there being any positive reason to take the thing seriously.”

But, similarly, one could say that a thing seeming unlikely cannot be conflated with a proof of impossibility.

You’ve come up with tons of explanations (electrical fences), analogies (colored frogs), and historical precedents to back up your gut feeling that P!=NP … but no amount of this will ever serve as a mathematical proof.

That’s all he’s saying: there is *no* mathematical proof (I thought you guys were in the business of proving things).

You may find that comical and beside the point, but one could say that what’s really comical is the inability of the complexity theory community to come up with a proof of something that’s supposedly so obvious?

I know your argument is “well, it’s hard to prove there isn’t a way to do something”, but your very argument about an electrical fence seems to suggest that there’s something very hard and palpable there; it’s just that we don’t have the tools to describe it?

1) A very relevant point here is that, by the Miller-Shi and Chung-Shi-Wu results, the initial seed doesn’t need to be perfectly random—it’s enough for it to have *some* entropy. And while generating “pure” randomness might be hard (at least without additional assumptions, like multiple independent sources), there are many ways to generate a sequence of bits with some entropy: for example, just make one up by typing randomly on your keyboard.

2) Then goddammit, just force Alice and Bob to return their answers before there was time for a signal to get from one to the other (only time for a signal to get from them to the referee, who’s in between them). This has been done in Bell experiments since Aspect’s in the 1980s.
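To make the “some entropy” idea in point 1) above a bit more concrete, here is a small sketch of my own (purely an illustration, not drawn from the Miller-Shi or Chung-Shi-Wu papers) that estimates the per-bit min-entropy of a hand-typed bit string from its empirical frequencies. The true min-entropy depends on the unknown source distribution, so this is only a rough indicator:

```python
import math
from collections import Counter

def empirical_min_entropy(bits: str) -> float:
    """Crude per-symbol min-entropy estimate: -log2 of the most
    frequent symbol's empirical probability.  Only illustrative --
    the real min-entropy depends on the source, not one sample."""
    counts = Counter(bits)
    p_max = max(counts.values()) / len(bits)
    return -math.log2(p_max)

# A lazily hand-typed string: biased toward 0, but not entropy-free.
print(empirical_min_entropy("0010010001011101001000100101"))  # ≈ 0.72
```

Even such a biased string retains a sizable fraction of a bit of entropy per symbol, which is the kind of imperfect seed these expansion results can work with.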

0) I think it comes down to different expectations for the word “practical”. When I see it I think “here is a practical experiment you can do with a laptop, two potatoes, and a stainless steel fork”. When you say it you mean “this isn’t only theoretical, but can actually be practically implemented given two supercomputers, a dozen scientists, and a million dollars of lab equipment”. I know your meaning is more accurate, but that didn’t stop me from expecting something that I could “try at home”.

1) My point was: if the seed number is not “Einstein-certified” random, then presumably neither is the output. Is this correct? If so, then what is the point of going to so much effort to ensure that the system can generate such strongly certified randomness? By contrast, in the case of pseudorandom number generators, the point is not to generate strongly random numbers but to generate numbers that can be *treated* as random for a specific purpose, and to do it very fast. In fact, I use pseudorandom number generators for this purpose all the time, and actually usually fix the seed, as randomness itself is not particularly useful for what I am trying to do (Monte Carlo simulations). But you wouldn’t use that method for something where randomness was *really* important.

2) I still don’t understand how that works. If there was enough time for the information about the cards drawn to have reached both Alice and Bob, then why is faster-than-light communication needed to cheat? As an example, if you and I played the game, but we both “knew” the referee and got him to stick little post-it notes on each of our cards stating the colour of the other one’s card, then we could easily win the game without faster-than-light communication and without quantum physics at all – no matter how random the initial seed was.
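For what it’s worth, the fixed-seed Monte Carlo usage described in 1) above looks something like the following sketch (the pi estimate is a generic stand-in example of my own, not the commenter’s actual simulation). With a fixed seed, every run is bit-for-bit reproducible, which is exactly the property one wants for debugging and comparing simulations:

```python
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Monte Carlo estimate of pi by sampling the unit square.
    The fixed seed makes every run reproducible -- here that
    matters more than the samples being 'truly' random."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))
```

The trade-off is just as stated above: this is fine when you only need numbers that *behave* like random samples, and exactly wrong when unpredictability itself is the point (cryptography, certified randomness).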

0) I **did** discuss a practical method for random number generation! As I said in the article, the Einstein-certified randomness protocols (a la Colbeck-Renner and Vazirani-Vidick) have already been demonstrated on a small scale and could be made practical with existing technology, if anyone wanted it enough.

1) The benefit is enormous **expansion** in the number of random bits you have. Pseudorandom number generators, which are used in all of our computers, operate on the same principle: you need to “seed” them with, for example, the current system time or some “random” keystrokes, but once you do, they’re able to generate a vastly larger number of bits that are indistinguishable from random bits by most ordinary programs. With Einstein-certified randomness, the main difference is that the huge number of outputs are *actually* random, rather than just pseudorandom. But the need for a seed isn’t new.

2) No, the whole point of the protocol is that *if* Alice and Bob win the game, and *if* the initial seed was fine, and *if* there was no faster-than-light communication, *then* the output bits must be truly random. You prove a theorem to that effect. To do so, in particular you have to rule out any possible cheating strategies of Alice and Bob.

Thanks for the article, I really enjoyed it (although I naively had half expected / hoped that you were going to give a *practical* method for producing random numbers).

Two quick questions:

1) If you need seed random numbers for the generator to work, what is the benefit of the generator? If you have some other method for producing true random numbers, then why not just use that? (I guess it could be slow, but so is your method for now; who says the expander will be quicker than whatever method you were using to generate the seed bits?) If you do not have another method, then how will the expander get its seed bits to work at all?

2) You seem pretty intent on ensuring that the method is foolproof against any mechanical defects, or even ‘nature cheating’. If so, how do you propose ensuring that Alice and Bob are entirely unable to cheat? The “referee” is sending both Alice and Bob the “cards” in order to play the game. As such, surely both Alice and Bob are within causal contact of the referee, and there could potentially be some sort of system error providing Alice and Bob with additional information, allowing them to cheat. Even if the system is not flawed, perhaps nature is cheating by ensuring that the information is being provided to allow cheating to happen?
