## Quantum Computing Since Democritus Lecture 18: Free Will

If you don’t like this latest lecture, please don’t blame me: I had no choice! (Yeah, yeah, I know. You presumably have no choice in criticizing it either. But it can’t hurt to ask!)

Those of you who’ve been reading this blog since the dark days of 2005 might recognize some of the content from this post about Newcomb’s Paradox, and this one about the Free Will Theorem.

### 27 Responses to “Quantum Computing Since Democritus Lecture 18: Free Will”

1. wolfgang Says:

Silly question: In your solution to Newcomb’s problem, what will the ‘simulated you’ find when it opens the box?

2. Chris Granade Says:

The Predictor knows what he needs at that point, so why not terminate the simulation? Or will that get us back to anthropic reasoning?

3. Mark Says:

It’s a bit of a stretch to say Nozick came up with the “Predictor” thought experiment. A very similar idea is due (iirc) to John Calvin and has been, um, fairly widely propagated. I guess Nozick deserves some credit for actually pointing out that it is a *paradox*, though!

4. Scott Says:

Wolfgang: That’s actually an extremely interesting question! In my solution, I was implicitly assuming that the ‘simulated you’ has no interests or goals that differ from those of the ‘real you.’ Or to put it more carefully: the fact that you know you might be the simulated you should affect your behavior (since what you do will determine the box contents), but not your objective function (which remains focused on the real you). And thus, it doesn’t matter what the simulated you finds when it opens the box: maybe the million dollars are there, maybe they’re not, or maybe the simulated you simply vanishes right after it makes a decision, having fulfilled its sole purpose at that point.

At the very end of the lecture, I give a puzzle (due to the philosopher Adam Elga) involving Dr. Evil on a moon base, where Austin Powers’ proposed solution hinges on making the opposite assumption: namely, that a clone of Dr. Evil will have an objective function that differs from the original Dr. Evil’s objective function.

So, it seems that we can’t accept both my solution to Newcomb’s Problem, and Austin Powers’ solution to the Dr. Evil problem, without finding some essential difference between the two problems.

Maybe the difference is simply this: even supposing that the ‘simulated you’ in Newcomb’s Problem did have goals and interests that differed from those of the ‘real you’, we have no idea what those goals and interests are. Unlike with the Dr. Evil problem, the statement of Newcomb’s Problem doesn’t suggest an answer. Maybe the simulated you gets the same money reward as the real you, and thus their goals happen to be perfectly aligned. Or maybe the simulated you gets the opposite reward, in which case you’ll be torn as to which ‘you’ you want to benefit. Or maybe the simulated you really does have no goals apart from maximizing the utility of the real you. Given that uncertainty, one could argue that it makes sense to focus on the utility of the real you, which is the one utility that the problem clearly specifies.

5. Craig G Says:

I’m a one-boxer. I think the ‘taking both boxes is always better’ argument is flawed because it considers leaves that aren’t in the game tree.

Because we assumed the predictor was always correct, you *can’t* take both boxes when the million dollars is present. Doing that violates the assumptions of the problem. I don’t believe this is a good problem in a free will debate, because as stated it essentially* assumes free will doesn’t exist.

*Give the predictor a time machine. Whenever it predicts wrong, it simply goes back in time and tries again. This allows it to beat both probabilistic and free behaviour.

6. wolfgang Says:

Scott,

there is another interesting issue.
Assume that your solution is accepted by all philosophers, blog readers, and other authorities, and that it gets published in the journals and on Wikipedia. In other words, every rational person knows that you should not open both boxes. (And in fact, every rational person should have come to your conclusion already…)
Would this not make the task of The Great Predictor much easier? In fact, anybody who could convincingly pose as a Predictor could then indeed predict the outcome…

7. Patrick Cruce Says:

I had two thoughts about the Newcomb problem.
1. If you flip a coin, you may not maximize your payout, but you will make the Predictor wrong. (Which might be the bigger payoff for the spiteful ones amongst us. 🙂) At the very least, this forces the Predictor to simulate not only you, but as much of the universe as you want (depending on your random input).

2. I could decide to make my decision depend on the result of some paradoxical problem. For example, I could select a random program in some language and run it on some Turing machine. If the program halts, I take box 2; if the program doesn’t halt, I take both boxes. Since the Predictor has already made its decision, I have to surmise that the program halted in the simulated world; otherwise the Predictor would have crashed.

I think there are a couple of ways to interpret these results. Maybe I’m cheating, or maybe this proves that a Predictor could not exist, or maybe I can guarantee free will because I can do zany things like that, or maybe I don’t get to play with random strings.

I tend to think it shows a flaw in the experiment. If I see the Predictor hasn’t crashed, then I don’t have to run the experiment; and if the Predictor predicts that it won’t have crashed, then either it has solved the halting problem, or I have somehow turned the Predictor into a means of collapsing the future state (so that I always happen to pick halting programs).

8. asdf Says:

A certain company whose name begins with D is at it again. I think they are in a He-Who-Must-Not-Be-Named doghouse around here, but there is a Slashdot post about it.

9. Scott Says:

Thanks, asdf! I need to glance at a Slashdot thread like that one every once in a while, to remind myself what I was put on earth for. (I have no response to anything there, other than to point people here and suggest they click around at random…)

10. Chris Granade Says:

Actually, I saw a few people posting comments on the Slashdot thread consisting almost entirely of links back here.

11. Scott Says:

Wolfgang: I see the problem as basically a game-theoretic one. Let’s suppose the predictors don’t have perfect information about the choosers’ intentions, but only noisy information (which is still better than nothing). Also, suppose it were known that everyone in the world had converted to one-boxerism. Then the predictors might grow lazy and always put the million dollars in. But then taking both boxes would become the dominant strategy for the choosers. But then the predictors would eventually wise up to that, and only put the million dollars in for choosers they were sure were one-boxers—so the dominant strategy for the choosers would revert to taking one box, and so on ad infinitum.

What that really means, however, is that the proportion of two-boxers will settle down to some equilibrium. In particular, suppose the predictor learns whether each particular chooser is a one-boxer or a two-boxer through a binary symmetric channel with noise rate η. And suppose the predictor gets utility 1 if it gives a one-boxer a million dollars or a two-boxer a thousand dollars, and utility 0 otherwise. Then the proportion of two-boxers in the population should reach some equilibrium 0<p(η)<1, the calculation of which I leave as an exercise for the reader.
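
For concreteness, here is one way to set up that exercise as a sketch. The modeling choices below (the predictor always places the million after a ‘one-boxer’ signal, and randomizes only when its posterior makes it exactly indifferent) are my own assumptions about how to formalize the comment, not anything the problem statement pins down:

```python
# Sketch of the predictor-chooser equilibrium, under the assumptions above.
# The predictor sees each chooser's type through a binary symmetric channel
# with noise rate eta, and gets utility 1 for giving a one-boxer the million
# or a two-boxer only the thousand.

MILLION, THOUSAND = 1_000_000, 1_000

def equilibrium(eta):
    """Return (p, q): the equilibrium fraction p of two-boxers, and the
    probability q that the predictor places the million after seeing a
    'two-boxer' signal (it always places it after a 'one-boxer' signal)."""
    assert 0 < eta < 0.5 * (1 - THOUSAND / MILLION)
    # The predictor mixes only when indifferent: its posterior that the
    # chooser is a one-boxer, given a 'two-boxer' signal, must equal 1/2:
    #   (1 - p) * eta = p * (1 - eta)   =>   p = eta.
    p = eta
    # Choosers must be indifferent between one- and two-boxing:
    #   [(1-eta) + eta*q] * M  =  [eta + (1-eta)*q] * M + T
    # which solves to q = 1 - T / ((1 - 2*eta) * M).
    q = 1 - THOUSAND / ((1 - 2 * eta) * MILLION)
    return p, q

def payoffs(eta, q):
    """Expected payoffs to a one-boxer and a two-boxer, given the
    predictor's mixing probability q."""
    p_million_one = (1 - eta) + eta * q   # P(million placed | one-boxer)
    p_million_two = eta + (1 - eta) * q   # P(million placed | two-boxer)
    return p_million_one * MILLION, p_million_two * MILLION + THOUSAND

p, q = equilibrium(0.1)
u1, u2 = payoffs(0.1, q)
print(p, q, u1, u2)  # at equilibrium, u1 == u2 (up to rounding)
```

Under these assumptions the equilibrium fraction of two-boxers comes out to p(η) = η, with the predictor placing the million almost always, even on a ‘two-boxer’ signal.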

12. cody Says:

This is a very interesting statement:

“If you had a copy of the Predictor’s computer, then the Predictor is screwed, right? But you don’t have a copy of the Predictor’s computer.”

Couldn’t you consider your body to be a copy of the Predictor’s computer? Of course, this leads to suspecting you should just negate whatever conclusion you arrive at… and leads me to that scene from The Princess Bride where Vizzini logically deduces which glass of wine not to drink.

A better solution: base your decision on extra-solar astronomical information immediately before choosing the box, as long as possible after Predictor has made her/his prediction, thereby forcing Predictor to simulate a less tractable instance of spacetime. If they have a good simulation of you, they’ll even know you are going to do it.

14. David Klempner Says:

I’m a compatibilist on the whole free will thing — which leaves me free to disagree with your dismantling of the anti-free-will “misconception”.

My view of it, in a nutshell: why would free will be impossible in a deterministic Newtonian universe but possible in QM? What reasonable definition of free will cares about something you don’t have any control over?

15. Koray Says:

You wanted the class not to reject the premises, but I am a bad boy, so there… The ‘predictor’ has perfect information about the internals of every player and probably has no trouble with those who cannot decide (halt). That is some predictor. Of course the certificate of such a colossal statement would be overwhelming to us underlings who can’t even figure out how to play such a simple game.

In other words, if I can verify the assertion that the Predictor indeed has cojones of epic proportions, then I probably know the answer to this puzzle; but more importantly, with all that information, now I too am a predictor, and will kick proverbial hind regions at this game (akin to the predictor-predictor mentioned in the comments above).

If I’m to take a leap of faith that the predictor’s indeed that good, then I might take another leap and choose whatever I want.

Btw, I don’t even know what free will means, but with regards to the legal system, it’s not just about ‘payback’ or fixing broken minds before they can rejoin society. It’s also about prevention: these people are at higher risk of committing crime (our prediction). So if the two geniuses could make a case for determinism as the root cause of their crimes, they’d be locked up just the same. I don’t know what that lawyer was smoking.

16. Vilhelm S Says:

I find it interesting that you changed the Dr Evil example. In the paper you link, Austin Powers threatens to punish the replica doctor unless the replica surrenders, but in your example the only threat is that the replica would be killed when the real doctor attacks.

But this changes things! In your example, Dr Evil can reason as follows: “Either I am the replica; in this case nothing bad will happen when I press the red button. Or I am the real doctor; in that case my attack will kill the replica – but why should I care about that loser?”. So in either case, the rational response is to go ahead with the attack. In order to arrive at the conclusion that not attacking is the correct solution, you have to apply Hofstadter’s concept of superrationality.

I think superrationality is very relevant to Newcomb’s problem as well. The reasoning that leads to two-boxing is basically Hofstadter’s “rational” argument: “whether the predictor was right or wrong, I’m better off taking both”, while the one-boxers are essentially being superrational (with the “other player” being their simulated copy in the Predictor’s computer).

17. Peter Morgan Says:

The analogy between teleportation and a computer’s move operation (copy-and-delete) needs more. Generally, for a computer move we copy the data we want to move, then remove the attribution of significance (we change a pointer to point to the new location). We don’t generally delete the original (zero-fill it, or random-fill it n times if it’s a hard drive) unless we’re worried that someone will read the computer’s memory and interpret it by content instead of using the official reference.
Equally, when I teleport to Mars, it’s not necessary to delete the original; I just have to remove its legal significance, so that the original can’t get at my bank account. The bank’s identity record, and the legal system generally, points to Mars. Once we take this kind of view, however, I could, instead of destroying the old me, come to a legally binding agreement as to how my property will be divided between the copies of myself; destroying the old me is just an extreme form of legal contract that I could decide upon, assuming the governing laws allowed it. The law manages well enough with identical twins, although there is always the psychology of being the older twin.

18. Scott Says:

Thanks so much, Vilhelm! I must have subconsciously changed the experiment in that subtle way between reading the paper and talking about it. (Good thing I’m not some preliterate bard, and we do have a source to go back to.) Thanks also for pointing me to Hofstadter’s concept of “superrationality.”

Your argument that it matters whether the clone gets punished for pushing the button depends on the assumption that the clone can act differently from the original Dr. Evil. My intuition was that, because they’re perfect clones, they both necessarily make the same choice, in which case superrationality becomes no different from ordinary rationality, and the argument against pushing the button goes through. Only if Dr. Evil and his clone physically can act differently do we have to assume either that the clone will be punished for pushing the button, that they’re superrational, or that the original Dr. Evil cares about the clone.

19. Scott Says:

Peter: If there were a clone of me running around with all the same memories, I think I’d have bigger concerns than protecting my bank account. 🙂

20. Patrick Cruce Says:

I like that idea of using some astronomical event. Then the predictor may have to simulate the whole universe.

I’m not sure you’d need a copy of the predictor. The problem stipulates that the predictor simulated you prior to you showing up to make your decision. I just make simulating me as difficult as solving the halting problem.

So that leaves several possibilities:
#1 In a certain % of worlds the Predictor never halts and thus never offers me the opportunity.(This means I never have to run non-halting programs, because I never reach the point where *I* have to make a decision)
#2 The Newcomb problem has a hidden stipulation that I always pick halting strings.

That would put me in a very strange situation if I actually get an opportunity to make a decision. I’m picking out of a box of “random” programs, and no matter which string I pick, I happen to pick a halting program. I know this, because if I had picked a non-halting program, the simulation wouldn’t have halted and I wouldn’t be granted the opportunity to choose.

So if I set the intention right now that I’m going to test a program to see if it halts, then I can guarantee that if I ever find a Predictor, I’ll not only have a means of winning $1M, but also an oracle for the halting problem. I’m not even cheating. A rational actor would prefer $1M and an oracle over $1M alone.

21. Blake Stacey Says:

> I like that idea of using some astronomical event. Then the predictor may have to simulate the whole universe.

For a different kind of fun, you could base your decision on some mathematical result, one which is in principle knowable but which you don’t know. “If the third digit in the exponent specifying the fiftieth Mersenne prime is even, I will choose both boxes.”

22. Nick Tarleton Says:

Obvious stupid question: faced with a Predictor that’s always been right, why shouldn’t I conclude that it has a time machine, and take one box without having to think about simulations, halting, etc?

23. Scott Says:

Don’t be ridiculous, Nick! The predictor must be simulating your brain atom-by-atom; it’s just old-fashioned common sense that there are no time machines. 🙂

Patrick: If you base your decision on a halting problem, then the Predictor can unconditionally take the action appropriate to problems that do halt, without caring whether your particular problem instance does. Then the Predictor isn’t stuck in a loop, and the real you does have to actually run the program. If it halts, then the Predictor was right. If not, then you wait forever, and no one ever knows that the Predictor was wrong, so you get neither the prize nor a halting oracle.

Blake: You don’t save yourself any work that way. If you go and compute the answer before picking boxes, then you’ll know the answer, but you could do that just as well without any Predictor involved. It doesn’t even make you particularly hard to predict (no superhuman resources are required on the part of the Predictor, other than knowing which problem you’ll use). If you don’t compute the answer but instead depend on the Predictor to tell you, then the Predictor doesn’t have to compute the answer either; it can use the same algorithm you do to actually pick boxes.

25. matt Says:

Since Scott loves to make fun of Bayesians, let me apply the Bayesian point of view here. I know that there is some person calling himself the “Predictor”, who has successfully predicted which box a person will pick many times. Let’s say he’s done this 100 times. So my hypotheses are: 1) this is indeed an alien who can perfectly predict what anyone will do; 2) this is a very rich and very eccentric person who has gotten lucky 100 times in a row; 3) this is a scam: there is some smoke and mirrors to move the money after I make my pick. We could also allow other possibilities, like that in previous trials the person was actually a confederate of the Predictor, if you like.

The only probability I can really compute here is the probability of getting lucky 100 times straight: 2^{-100}. But I think (1) is really unlikely too, and I believe that (3) is possible. So I think it’s most likely to be case (3). So, I’ll pick one box.
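
In code, the update looks something like this. The prior probabilities are illustrative numbers I’ve made up for the sketch, not anything the comment specifies:

```python
# Back-of-envelope Bayesian update over matt's three hypotheses,
# with made-up priors (the only given likelihood is 2^-100 for luck).
priors = {
    "perfect alien predictor":  1e-12,  # hypothesis (1)
    "rich person, just lucky":  1e-3,   # hypothesis (2)
    "scam / smoke and mirrors": 1e-6,   # hypothesis (3)
}
# Likelihood of observing 100 correct predictions under each hypothesis:
likelihoods = {
    "perfect alien predictor":  1.0,
    "rich person, just lucky":  2.0 ** -100,
    "scam / smoke and mirrors": 1.0,
}
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: v / total for h, v in unnormalized.items()}
print(posterior)  # with these priors, the scam hypothesis dominates
```

With any priors in this ballpark, the 2^{-100} likelihood crushes the luck hypothesis, so the conclusion turns entirely on the relative priors of (1) and (3).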

26. matt Says:

Actually, I guess I can’t even compute the probability of getting lucky 100 times: maybe people tend to pick 1 box with 90% probability or something, and using a little psychology the chances of being right 100 times straight are a lot higher than 2^{-100}.

27. Dmitry Says:

By the way, Conway and Kochen have just published a new version of their argument: The Strong Free Will Theorem, together with some replies to critics.