An anonymous commenter asked for my opinion of The Free Will Theorem, a much-discussed recent paper by John Conway and Simon Kochen. I’ve been putting it off, but I’ll finally will myself to say something.
I read The Free Will Theorem mostly as an amusing romp through the well-travelled philosophical terrain of quantum mechanics, relativity, and entanglement. I’ve always enjoyed Conway’s writing style, so it was a treat to see his usual jokes and puns out in full force.
Of course, the reason the paper has attracted attention is the Free Will Theorem itself, which I’ll paraphrase as follows:
Suppose that (1) the laws of physics allow something like a Bell or GHZ experiment, (2) the people doing the experiment can set their detectors any way they want (i.e., in a way not determined by the previous history of the universe), and (3) something like Lorentz invariance holds (i.e., there's one reference frame where experimenter A measures first, and another where experimenter B measures first). Then the results of the experiment are also not determined by the previous history of the universe.
Or as the authors colorfully put it: “if indeed there exist any experimenters with a modicum of free will, then elementary particles must have their own share of this valuable commodity.”
(Note that by “free will,” all Conway and Kochen mean is the property of not being determined by the previous history of the universe. So even events with known probability distributions, like coin flips and quantum measurements, can have “free will” according to their definition.)
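To see the logic in miniature, here's a quick sketch in Python (my illustration, not anything from the paper) of the standard CHSH version of Bell's argument: if Alice's and Bob's outcomes were fixed in advance by the previous history of the universe, the CHSH correlator could never exceed 2, whereas quantum mechanics, and actual experiments, reach 2√2 ≈ 2.83.

```python
# Brute-force check of the CHSH/Bell bound: if Alice's and Bob's outcomes are
# predetermined functions of their detector settings (a deterministic local
# hidden-variable strategy), the CHSH correlator S can never exceed 2.
from itertools import product
from math import sqrt

best = 0
# A deterministic strategy fixes an outcome (+1 or -1) for each of Alice's
# two settings (a0, a1) and each of Bob's two settings (b0, b1).
for a0, a1, b0, b1 in product([+1, -1], repeat=4):
    # CHSH combination: E(A0,B0) + E(A0,B1) + E(A1,B0) - E(A1,B1),
    # where each correlator is just a product of the fixed outcomes.
    s = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    best = max(best, abs(s))

print("max |S| over deterministic strategies:", best)        # prints 2
print("quantum-mechanical (Tsirelson) value:", 2 * sqrt(2))  # ~2.828
```

Randomizing over such strategies doesn't help, since any probabilistic mixture of predetermined answers is a convex combination of deterministic ones and obeys the same bound.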
My reaction to the Free Will Theorem is threefold:
- It’s a very important, even if mathematically trivial, consequence of the Bell/GHZ/Kochen-Specker-type theorems.
- It will be new to many physicists.
- It was folklore among those who think about entanglement and nonlocality.
I’ll be grateful for any references in support of the last point. Right now, all I can offer is that I gave almost the same argument four years ago, in my review of Stephen Wolfram’s A New Kind of Science (see pages 9-11). My goal there was to show that no deterministic cellular-automaton model of physics, of the sort Wolfram was advocating, could possibly explain the Bell inequality violations while respecting relativistic invariance. I didn’t think I was saying anything terribly new.
Conway and Kochen try to preempt such criticism as follows:
> Physicists who feel that they already knew our main result are cautioned that it cannot be proved by arguments involving symbols such as ℏ, Ψ, ⊗, since these presuppose a large and indefinite amount of physical theory.
I find this unpersuasive. For me, the whole point of the Bell, GHZ, and Kochen-Specker-type theorems has always been that they don't presuppose quantum mechanics. Instead they show that any physical theory compatible with certain experimental results has to have certain properties (such as nonlocality or contextuality).
I should admit that the Free Will Theorem improves on the argument in my book review in at least three ways:
- It gets rid of probabilities, by going through a two-party version of the Kochen-Specker Theorem instead of through Bell’s inequality. (I mentioned in my review that the argument could be redone using the GHZ paradox, which involves three parties but is deterministic. I didn’t mention that it could also be done using two-party Kochen-Specker. A brute-force check of the GHZ version is sketched just after this list.)
- It gives a cute, memorable name — “free will” — to something that I referred to only by convoluted phrases like “randomness that’s more fundamental than the sort Wolfram allows” (by which I meant, that’s not reducible to Alice and Bob’s subjective uncertainty about the initial state of the universe).
- It makes the assumptions more explicit. For example, I never talked about Alice and Bob’s “free will” in choosing the detector settings, since I thought that was just assumed in talking about Bell’s inequality in the first place! (In other words, if Wolfram denied that Alice and Bob could choose the detector settings independently of each other, then he could have dispensed with Bell’s inequality in a much simpler way than he actually did.)
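For concreteness, here's that brute-force check of the GHZ route mentioned in the first item above (again my illustration, not Conway and Kochen's two-party Kochen-Specker argument). In Mermin's sign convention, quantum mechanics predicts with certainty that the measured outcomes satisfy XYY = YXY = YYX = +1 and XXX = -1, and no table of predetermined ±1 answers can satisfy all four equations:

```python
# Brute-force check of the GHZ paradox (Mermin's sign convention): for a
# GHZ-entangled triple, quantum mechanics predicts with certainty that
#   x1*y2*y3 = y1*x2*y3 = y1*y2*x3 = +1   and   x1*x2*x3 = -1.
# No assignment of predetermined values (+1 or -1) to each particle's X and Y
# measurements can satisfy all four constraints at once.
from itertools import product

count = 0
for x1, y1, x2, y2, x3, y3 in product([+1, -1], repeat=6):
    if (x1 * y2 * y3 == +1 and
            y1 * x2 * y3 == +1 and
            y1 * y2 * x3 == +1 and
            x1 * x2 * x3 == -1):
        count += 1

print("predetermined assignments satisfying all GHZ constraints:", count)  # 0
```

(The reason is immediate: multiplying the first three constraints gives x1\*x2\*x3 = +1, since every y appears twice, which contradicts the fourth. Unlike the Bell/CHSH case, no probabilities are involved; a single run suffices to refute predetermination.)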
I should also admit that I like Conway and Kochen’s paper. Indeed, the main question it raises for me is not “how could they possibly pass this off as original?” but rather “do we, as scientists, sometimes put too high a premium on originality?”
In all the reading I’ve done in philosophy, I don’t know that I’ve ever once encountered an original idea — in the sense that, say, general relativity and NP-completeness were original ideas. Indeed, whenever I read about a priority dispute between philosophers (like the infamous one between Saul Kripke and Ruth Barcan Marcus), it strikes me as absurd: all the ideas under dispute seem obvious!
But does it follow that philosophy is a waste of time? No, I don’t think it does. The same “obvious” idea can be expressed clumsily or eloquently, sketched in a sentence or developed into a book, brought out explicitly or left beneath the surface. Now, I’m well aware that that’s not an original sentiment — nor, for that matter, is anything in this post, or probably this entire blog. Yet here I am writing it, and here you are reading it.
You might respond that Wolfram can (and does) mount a similar defense of A New Kind of Science: that sure, lesser mortals might have realized decades ago that simple programs can produce complex behavior, but they didn’t grasp the true, Earth-shattering significance of that fact. Compared to Wolfram, though, I think Conway and Kochen have at least two things going for them: (1) they don’t spend 1,200 pages denigrating the work of other people, and (2) they accept quantum mechanics.
> All streams run to the sea,
> but the sea is not full;
> to the place where the streams flow,
> there they continue to flow.
> All things are wearisome;
> more than one can express;
> the eye is not satisfied with seeing,
> or the ear filled with hearing.
> What has been is what will be,
> and what has been done is what will be done;
> there is nothing new under the sun.