I’m trying to probe the confines of what the “oracle” or “predictor” is and can actually do. If it gets a pass by saying “oh, the choice will be random, no soup for you”, then I suppose that’s part of the “paradox”. It seems, though, like the paradox is rather limited in scope. I have a weird inkling that a lot of decision theory posits a totally rational world at the outset, when that’s far from being the case. Hence its utility seems rather limited to me, other than maybe making statistical predictions about things. And if all the “oracle” is is a really good guesser, then what sort of special power is that? From the way the paradox is framed, it seems like the “oracle” has some magical ability to predict what I’m going to do. Well, it’s easy enough for me to turn that decision over to some other thing X, so that the predictor’s actual job is to predict the behavior of X rather than me. And if that’s the case, then it’s easy to tie it in knots by giving it a problem it can’t solve. In my mind it boils down to being somewhat pointless, like “an all-powerful being moving an immovable object”.
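The “turn the decision over to some other thing X” move above is essentially the classic diagonalization trick. A minimal sketch, where the names `predict` and `defier` are hypothetical and `predict(agent)` is assumed to return the choice the oracle thinks `agent` will make:

```python
def defier(predict):
    """A stand-in for agent X: ask the predictor what it will predict
    about this very agent, then do the opposite."""
    guess = predict(defier)
    return "two-box" if guess == "one-box" else "one-box"

# Whatever predict(defier) returns, defier's actual choice differs from
# it, so no predictor implemented as an ordinary program can be right
# about X -- the same self-reference behind the halting problem.
```

This is only a cartoon, of course: it shows why a predictor that is just a program can be defeated, not anything about a predictor with stronger powers.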

I guess when I think of “paradox”, I think of Schrödinger’s Cat or something similar where there’s a real underlying philosophical question of interpretation that may also spill over into experimentation. Hence to me, Newcomb’s Paradox really only exists as a paradox if you confine it to a rather limited “space” governed by the mathematics of decision theory, rather than being a statement about the larger universe and how it may operate.

In any case the usual response to this is to restate the problem to say that if the predictor predicts that you will randomize, it does not place the money in the box.

Ah, o.k. So, when I walk into the room I replace step 2 with:

I have a smartphone in my pocket. I pull it out and bring up the hot new app “Am I Halt Or Not”. You can swipe left or right — swipe left to bring up a new random program submitted by random people on the internet, or swipe right to select a particular program. Once one is selected, I provide some “random” input using finger-wiggling on the screen and hit “go”. After a certain amount of time, I hit “stop” and an indicator will be either green or red — green if the program hasn’t halted, red if it has.

I make the same set of decisions based on the result. To me, the question of when the Predictor makes its prediction is less important, but again, maybe I’m somehow violating the rules of the game.
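The app strategy above can be sketched as a small simulation. Everything here is hypothetical: real submitted programs are replaced by a step count (with `float("inf")` standing in for a program that never halts), and “running for time T” by a step budget.

```python
import random

def random_program():
    """Stand-in for a program pulled from the app: halts after a
    random number of steps, or never (modeled as infinity)."""
    return random.choice([random.randint(1, 50), float("inf")])

def observe(steps_budget):
    """Run the chosen program for a budget of steps and report the
    indicator: 'red' if it halted within the budget, 'green' if not."""
    return "red" if random_program() <= steps_budget else "green"

def choose_boxes():
    indicator = observe(steps_budget=random.randint(1, 100))
    # Same decision rule as in the comment: red -> both boxes,
    # green -> box b only.
    return "both" if indicator == "red" else "box b"
```

The point of the sketch is only that the choice is being driven by an observation the Predictor would have to anticipate, not by anything about “me”.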

Assuming the usual rules, the Predictor has already halted by the time you are offered the choice. Allowing you to choose before then would defeat the purpose, since the eventual “prediction” would be of an action you’ve already taken.

Thanks for your response, I appreciate your time! That makes sense. Is it possible to simply know whether the Predictor has or has not “halted” within the bounds of the paradox? To me, being able to know the answer to that question at any time T I choose is essentially the same thing as being able to peer inside the Predictor: examine its source code, or take other actions to determine whether it will “halt” in the future. Because regardless of how much “access” I have to the Predictor, the Predictor’s job is still essentially the same (“will I halt at time T?”). I don’t know how “time” formally plays into the bounds of the paradox, but I envision “reducing” myself to a very simple “machine” w.r.t. the Predictor:

1.) I walk into the room. There are two boxes and I can pick box b or both a & b.

2.) I wait some amount of time T which I determine at random in the moment (maybe I pick it beforehand, maybe I just stand around for a while until I feel like making my observation). At the end of that time, I observe whether the Predictor has “halted”. Maybe there’s a glowing red LED on the front and I observe whether or not that LED is still lit, or some other set of physical indicators that show the machine’s current state. This requires me to know “something” about the Predictor, but I don’t need to know anything beyond what I can physically observe in the moment. If the Predictor is some super-intelligent being, then I observe whether or not said being is “alive” at the moment.

3.) If the Predictor has “halted”, I take both boxes. Otherwise I take box b.
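The three steps above can be sketched as a decision procedure. The parameter `predictor_halted_at` is hypothetical bookkeeping for the simulation (use `float("inf")` for a Predictor that never halts on its own, as the comment below argues is its natural state):

```python
import random

def led_is_lit(predictor_halted_at, t):
    """Step 2's physical observation: the LED is lit iff the
    Predictor has not yet halted at time t."""
    return t < predictor_halted_at

def decide(predictor_halted_at):
    t = random.uniform(0, 100)      # step 2: wait a random time T
    if led_is_lit(predictor_halted_at, t):
        return "box b"              # step 3: not halted -> box b only
    return "both boxes"             # halted -> take both
```

Under the assumption that the Predictor essentially never halts at a random moment, `decide` returns “box b” almost always, which is the “safe bet” claimed below.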

It seems that if I want to limit this to a purely decision-theory problem, it’s necessary to completely isolate the Predictor from me so I can’t make a decision based on its current observed state. It also seems that by making my decision in this way, I win because an inherent property of the Predictor seems to be that it doesn’t halt/die at random points in time. Hence predicting that it will be “alive”/not halted is a safe bet. Also, this implies that “oracle”-like access to the Predictor is, in the context of this problem, essentially the same as being able to observe the Predictor’s current state in some way.

The Nevanlinna winner, Costis Daskalakis, by contrast, is a former colleague of mine from the theory group at MIT. I know him quite well, and also know some of his great results—most famously on the hardness of finding Nash equilibria, but also some of his other stuff on e.g. the complexity of auctions. Huge congratulations to Costis!

Can you point me to the most up-to-date survey or book on quantum computer hardware/design? A text written for physicists would be great.

thanks

a.