Does dissolving Newcomb's paradox matter?
Context: Newcomb's Paradox is a problem in decision theory. Omega swoops in and places two boxes in front of you. One is transparent and contains $1,000. The other is opaque and contains either $1,000,000, if Omega thinks you'll take only the opaque box, or $0, if Omega thinks you'll take both boxes.
Omega is a perfect predictor. Do you take one or both boxes?
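
To make the setup concrete, here's a minimal sketch of the payoff structure in Python (the layout and names are mine; the dollar amounts are the ones above):

```python
# Payoffs in Newcomb's problem, indexed by (Omega's prediction, your action).
# The transparent box always holds $1,000; the opaque box holds $1,000,000
# only if Omega predicted one-boxing.
PAYOFFS = {
    ("predicts one-box", "take one box"): 1_000_000,
    ("predicts one-box", "take both"):    1_001_000,
    ("predicts two-box", "take one box"): 0,
    ("predicts two-box", "take both"):    1_000,
}

# With a perfect predictor, only the two "matching" rows can actually happen:
# one-boxers walk away with $1,000,000, two-boxers with $1,000.
for (prediction, action), payoff in PAYOFFS.items():
    print(f"{prediction:16} | {action:12} | ${payoff:,}")
```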
Newcomb's problem feels like a paradox. Nina says it isn't. Her case is that if you're faced with a perfect predictor (or even a better-than-chance one, but let's do the simple case first) you basically don't have a choice. Hence all the talk of what choice you will make doesn't really make sense. Talking about whether you'll choose to take one box or two after Omega has predicted your action is like asking whether a printed circuit board with a set configuration should "choose" to output a 0 or a 1. It's fundamentally a question that doesn't make sense, and all the apparent weirdness and paradox that follows stems from asking a nonsensical question.
I basically agree with this claim. I also think it's an insight that's not that important. Let's talk about two kinds of choice:
- choice in the moment
- choice of what kind of agent to be
I think it's correct that talking about "choice" in the moment is misguided. If Omega is a perfect predictor, you don't really have a choice at the point at which Omega has left and you have two boxes in front of you. Or you do in some kind of compatibilist sense that we may care about morally, but not in the decision-theoretic sense. A different kind of choice you do have is what kind of agent you want to be, or what kind of decision-making algorithm you want to run in general. This second kind of choice is not impacted by Omega being a perfect predictor, because it happens before Omega swoops in. For this choice, Newcomb's problem is still fairly interesting.
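
To illustrate that second kind of choice, here's a minimal sketch, assuming Omega predicts perfectly by simply running your policy (the function names and the framing as Python policies are my own illustration, not part of the original problem):

```python
# Choosing "what kind of agent to be" = choosing a policy before Omega shows up.
# Against a perfect predictor, Omega's prediction just *is* your policy's output,
# so the comparison between policies is what actually determines your payoff.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def play_against_perfect_omega(policy):
    prediction = policy()               # perfect prediction: Omega runs your policy
    opaque = 1_000_000 if prediction == "one-box" else 0
    action = policy()                   # "in the moment" you just execute the same policy
    return opaque if action == "one-box" else opaque + 1_000

print(play_against_perfect_omega(one_boxer))  # 1000000
print(play_against_perfect_omega(two_boxer))  # 1000
```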
I guess my meta-level thoughts on why Newcomb's problem is worth thinking about go something like this:
- Agents have to make decisions about what actions to take (or, to put it differently, they have to implement a certain decision-making algorithm)
- What algorithm an ideal agent should implement is a pretty important question
- If you think about decision theory a bit, you'll probably end up believing in either causal or evidential decision theory
- Both of these seem to make sense, but have various cases where they obviously fail
- In Newcomb's Problem, causal decision theory fails (it recommends two-boxing, and two-boxers predictably walk away with only $1,000)
- In the smoking lesion case, evidential decision theory fails (there's a gene which makes you want to smoke and also gives you a much higher risk of cancer; smoking doesn't otherwise cause cancer. You want to smoke. Should you avoid smoking because of the cancer risk? Evidential decision theory says yes, causal says no, and causal is right in this case; see the sketch after this list)
- The core problem in decision theory is reconciling these various cases and finding a theory which works generally
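
Here's a rough sketch of the smoking lesion case, with probabilities and utilities I've made up purely to show how the two theories come apart (none of these numbers are part of the original thought experiment):

```python
# Smoking lesion with illustrative (made-up) numbers.
# The lesion both causes a desire to smoke and causes cancer; smoking itself does nothing.
P_LESION = 0.2                      # assumed base rate of the lesion
P_LESION_GIVEN_SMOKE = 0.8          # assumed: smokers are mostly lesion-havers
P_LESION_GIVEN_ABSTAIN = 0.05
P_CANCER_GIVEN_LESION = 0.9
P_CANCER_GIVEN_NO_LESION = 0.01

U_SMOKE = 10                        # you enjoy smoking
U_CANCER = -1000

def expected_utility(p_lesion, smokes):
    p_cancer = p_lesion * P_CANCER_GIVEN_LESION + (1 - p_lesion) * P_CANCER_GIVEN_NO_LESION
    return (U_SMOKE if smokes else 0) + p_cancer * U_CANCER

# Evidential decision theory treats the action as evidence about the lesion,
# so choosing to smoke "raises" your probability of having it -> it says don't smoke.
edt_smoke = expected_utility(P_LESION_GIVEN_SMOKE, smokes=True)
edt_abstain = expected_utility(P_LESION_GIVEN_ABSTAIN, smokes=False)

# Causal decision theory holds the lesion probability fixed (your action can't change it),
# so smoking just adds its enjoyment on top -> it says smoke.
cdt_smoke = expected_utility(P_LESION, smokes=True)
cdt_abstain = expected_utility(P_LESION, smokes=False)

print(f"EDT: smoke={edt_smoke:.1f}, abstain={edt_abstain:.1f}")  # EDT prefers abstaining
print(f"CDT: smoke={cdt_smoke:.1f}, abstain={cdt_abstain:.1f}")  # CDT prefers smoking
```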
Newcomb's problem is thus still important and interesting even if you don't think it's a paradox. Although, saying that, it does feel like I'm basically agreeing with Nina that the paradox can be dissolved. It's just that I don't think dissolving it actually does much philosophically.