Silly philosophers
Jun. 24th, 2006 03:34 pm

So. Newcomb's paradox. It goes like this. You are invited to a game by someone known as the predictor. A psychic, a superintelligent psychologist, God - whatever. Someone who predicts what you're going to do. He presents you with two boxes: box A and box B. You can choose either to take box B alone, or to take box A as well. Box A always contains $1000 (or, more generally, an amount of money I'll call a, a>0). The thing is, if the predictor has predicted that you'll take B alone, B will contain $1,000,000 (or, more generally, an amount of money I'll call b, b>a), but if he's predicted you'll take both, it won't contain anything at all. The "paradox" is that some people argue you should take B alone, for obvious reasons (a million vs. a thousand!), while other people argue you should take both, for obvious reasons (regardless of what B contains, you'll get more this way). According to Wikipedia, "it is a much debated problem in the philosophical branch of decision theory but has received little attention from the mathematical side." Wonder why that is... :P
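For concreteness, here's the payoff structure in a few lines of Python (a minimal sketch; the names are mine, not part of the problem):

```python
# The two boxes, with the post's amounts a = $1,000 and b = $1,000,000.
A, B = 1_000, 1_000_000

def payoff(choice, prediction):
    """Winnings given your choice and the predictor's prediction;
    each is either "B" (take B alone) or "both"."""
    box_b = B if prediction == "B" else 0  # B is filled only on a "B alone" prediction
    return box_b + (A if choice == "both" else 0)

# The four possible outcomes:
for choice in ("B", "both"):
    for prediction in ("B", "both"):
        print(choice, prediction, payoff(choice, prediction))
# B    B       1000000  <- predicted correctly: the million
# B    both          0  <- he guessed "both", you took B alone: nothing
# both B       1001000  <- he guessed "B alone", you took both: everything
# both both       1000  <- predicted correctly: just the thousand
```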
Really, this is just silly. Wikipedia mentions that some have suggested that in the case of a reliable predictor, you should take B alone, while in the case of an unreliable predictor, you should take both. Hm, could we state this more precisely? Indeed we could! Let's assume that there is a probability, p_a, which is the probability that if you are going to take both, the predictor will predict correctly, and a probability, p_b, which is the probability that if you are going to take B alone, the predictor will predict correctly. (Reason for this assumption: However the predictor works, he must somehow get certain relevant information from you - whether it's directly out of your mind or by simply observing you normally - and then decide whether this information indicates B or both. So he has some way of interpreting the data to sort it into "B alone" or "both". So we can say there's some probability that a "B alone" signal will be correctly interpreted, and some probability that a "both" signal will be correctly interpreted.) Then it's easy to calculate the expected values of the two picks: taking B alone has an expected value of b·p_b, while taking both has an expected value of a + (1-p_a)·b. Taking B alone wins exactly when b·p_b > a + (1-p_a)·b, which rearranges to p_a+p_b-1 > a/b; so it comes down to a comparison between a/b and p_a+p_b-1. (If the former is higher, pick both; if the latter is higher, pick B alone.) So, as we are assuming a<b, if the predictor is, in fact, absolutely reliable (p_a=p_b=1), you should pick B, as was really obvious in the first place. Otherwise, it depends on just how big a is relative to b and just how reliable the predictor is. In the case of the original problem, a/b=1/1000, so if the predictor is anywhere near what he claims to be, you should probably pick B. (If the predictor instead makes his prediction by actually seeing the future, then the result still holds; he receives a fixed vision of the future, and he must determine whether it represents one box being taken or two. If this is an easy task, as you'd think it would be, you should always take just box B.) C'mon, philosophers, was that really so hard?
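If you want to play with the numbers yourself, here's that calculation as a Python sketch (the function names are my own invention):

```python
def expected_values(a, b, p_a, p_b):
    """Expected winnings under a predictor who is right with
    probability p_b when you take B alone and p_a when you take both."""
    ev_b_alone = p_b * b            # B contains b only if he predicted "B alone"
    ev_both = a + (1 - p_a) * b     # a for sure, plus b if he guessed wrong
    return ev_b_alone, ev_both

def best_pick(a, b, p_a, p_b):
    # Equivalent comparison after rearranging: B alone wins exactly
    # when p_a + p_b - 1 > a/b.
    return "B alone" if p_a + p_b - 1 > a / b else "both"

# Original amounts: even a weak predictor (60% reliable either way)
# favors B alone, since 0.6 + 0.6 - 1 = 0.2 > 1/1000.
print(expected_values(1_000, 1_000_000, 0.6, 0.6))  # (600000.0, 401000.0)
print(best_pick(1_000, 1_000_000, 0.6, 0.6))        # B alone
```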
Basically the people who argue for always picking both, as I understand it, are doing so on the basis that the predictor's already made his choice, so you'll always get more by picking both. But this makes the fundamentally wrong assumption that just because it's already been done, it's independent of whatever you're going to do. (To use Douglas Hofstadter's example, two people are told to calculate 5*7. When they both get 35, this result is... completely unremarkable - neither calculation caused the other, yet they agree, because both were determined by the same underlying process. Correlation doesn't require causation.) The very problem statement presupposes the existence of a predictor such that his prediction and your choice are *not* independent of each other. If you really believe in free will that strongly, your answer shouldn't be "pick both", it should be, "this problem is nonsense because it presupposes the existence of a predictor". Was that really that hard to notice?
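To see the mistake in miniature, here's a toy calculation (the 90% reliability figure is one I've picked arbitrarily): holding B's contents fixed, taking both really does always pay more - the error is in holding them fixed.

```python
# The two-boxers' dominance argument vs. the dependent calculation.
a, b = 1_000, 1_000_000

# Holding the prediction (and hence B's contents) fixed, "both" always
# pays a more:
for box_b in (b, 0):
    print(box_b + a > box_b)  # True, True -- so take both, says dominance

# ...but the prediction isn't fixed independently of your choice. With a
# 90%-reliable predictor (p_a = p_b = 0.9), conditioning on your own
# choice gives the opposite ranking:
p = 0.9
print(p * b)            # take B alone: 900000.0
print(a + (1 - p) * b)  # take both:    101000.0
```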
Now, here's what I'm thinking. Hold a real-life version of Newcomb's Paradox. Of course, if I ever want to actually do this (not that I probably will, but...), we'd need to scale down the money amounts, but let's keep the ratio the same, 1/1000. Say, to make it really small, a penny vs. $10. Invite philosophers who have written about the problem. Make a great show of "examining" them - their face and their body as they think - before deciding which box B to use, then let them choose. If one of them goes up quickly to take a box, stop them and say that you haven't examined them enough to make a prediction yet. In actuality, your prediction will be based on whether, in their writings, they have argued for taking both boxes or just B. (Don't invite those who have provided more discriminating answers. I'm assuming, mind you, that there *are* a bunch of philosophers who have written that you should always take both or always take B alone.) Those who have argued for taking both will presumably still take both. Those who have argued for taking just B might be a bit skeptical of your reliability (that's what the act of "examining" them is for: to make it look like you can actually read people that well), but there's a good chance they'll take just B anyway, because they know that if they took both, the other philosophers would see that even though they'd written in favor of B, they picked both - they'd be selling their pride for a penny. (Hopefully none of them figure out what's really going on, or they'll all pick both - though they can't coordinate on that if we keep them from conferring. But to prevent anyone from catching on in the first place, maybe it would be good to occasionally make an opposite prediction, or, better yet, not reverse any of the predictions, but instead invite a bunch of ordinary people as well - probably best if they're just plants - to convince the philosophers of your reliability.) This should be reliable enough to beat the a/b = 1/1000 threshold from before, and, surprise surprise! Those who pick B alone will usually do better! I would really like to see that, personally.
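And since we've already got the p_a/p_b machinery, here's a rough Monte Carlo sketch of how the stunt might play out - all the compliance rates are pure guesses on my part:

```python
import random

# A rough simulation of the stunt. My guesses: 95% of two-box advocates
# and 80% of one-box advocates (the skeptical ones) act as they wrote,
# and the "prediction" is simply read off their published writings.
random.seed(1)
a, b = 0.01, 10.00  # a penny vs. $10, keeping a/b = 1/1000

winnings = {"B": [], "both": []}
for _ in range(10_000):
    wrote = random.choice(["B", "both"])  # what they argued in print
    prediction = wrote                    # the whole "psychic" act
    stick = 0.8 if wrote == "B" else 0.95
    choice = wrote if random.random() < stick else ("both" if wrote == "B" else "B")
    box_b = b if prediction == "B" else 0.0
    winnings[choice].append(box_b + (a if choice == "both" else 0.0))

for choice, ws in winnings.items():
    print(choice, round(sum(ws) / len(ws), 2), "avg over", len(ws), "trials")
# Roughly: B-pickers average about $9.40, both-pickers about $1.75 --
# those who pick B alone usually do better, just as claimed above.
```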
-Sniffnoy