Newcomb’s paradox

Newcomb’s paradox is a paradox because half of the very smart people who consider it think one answer is obvious, the other half think the other answer is obvious, and each half thinks the other half is being stupid.

The scenario is that a demon that can accurately anticipate people’s actions, or an AI, or a committee of psychiatrists, or something like that, offers you two boxes: a box that contains a thousand dollars, and a mystery box that may contain a lot more than a thousand dollars, or may contain nothing. If the demon has predicted you will open both boxes, he has put nothing in the mystery box. If the demon has predicted you will open only the mystery box, and return the box that definitely contains a thousand dollars, he has put considerably more than a thousand dollars in the mystery box.

You know that this entity is very good at predicting such actions. Do you take both boxes, or only the mystery box?
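For concreteness, the choice can be framed as an expected-value calculation. Here is a minimal sketch; the thousand-dollar visible box is from the scenario, while the million-dollar prize and the accuracy values are illustrative assumptions, since the text says only “considerably more than a thousand dollars”:

```python
# Expected payoff of each choice, given the predictor's accuracy p.
# Assumed figures (not fixed by the scenario): the visible box holds
# $1,000 and the mystery box, when filled, holds $1,000,000.
VISIBLE = 1_000
MYSTERY = 1_000_000

def expected_value(one_box: bool, p: float) -> float:
    """p = probability the predictor correctly anticipates your choice."""
    if one_box:
        # Predictor correct -> mystery box was filled.
        return p * MYSTERY
    # Taking both: predictor correct -> mystery box is empty;
    # predictor wrong -> it was filled anyway.
    return VISIBLE + (1 - p) * MYSTERY

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box {expected_value(True, p):,.0f}, "
          f"two-box {expected_value(False, p):,.0f}")
```

With these assumed stakes, a coin-flip predictor favors taking both boxes, while any reasonably reliable predictor favors taking only the mystery box, which is why each camp finds its answer obvious.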

What makes this paradoxical is that the demon and the action are morally neutral, so there is no right answer. Even though pretty much everyone has unreasonable confidence that he has the right answer, there is no solution that can be attained by reason alone.

Let us suppose that it is not a demon, but a pastor of an underground church. The pastor is very good at judging people’s character, and you have agreed not to take the thousand dollar box. But because the church is underground, there will be no consequences for defecting on him, other than that he will have successfully tested his ability to judge people’s character. The only negative consequence for you is that he will think ill of you.

Well, in this case, assuming he is good at assessing character, there is no paradox. The rational, self-interested action is to be the person you hope he thinks you are.

If it is possible to accurately judge character, and if character remains consistent over time, then rational self interest aligns with virtue; hence the tendency of smart people to behave better than stupid people, and of wicked people to deny that character is consistent over time. Supposedly, just because a criminal has been arrested three times for murdering three people, we cannot conclude that putting him in an oubliette and leaving him there forever is a good idea, and should instead release him almost immediately, this being costless and benevolent.

A cooperate/cooperate equilibrium is by definition preferable to a defect/defect equilibrium. But the standard game theoretic analysis leads to the conclusion that cooperation is only rational if there will be a large and indefinite number of future interactions. If there is a definite limit to the number of future interactions, or if terminating the relationship is relatively low cost, defection is rational. If, however, people have significant capability to predict cooperation, then cooperation becomes rationally optimal under considerably broader circumstances.
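The effect of prediction can be sketched with a one-shot prisoner’s dilemma against a counterpart who cooperates exactly when a predictor of accuracy q forecasts that you will. The payoff values below are the standard textbook ones, not from the text:

```python
# One-shot prisoner's dilemma with a character-reading predictor.
# Standard assumed payoffs: T (temptation) = 5, R (mutual cooperation) = 3,
# P (mutual defection) = 1, S (sucker's payoff) = 0.
T, R, P, S = 5, 3, 1, 0

def payoff(cooperate: bool, q: float) -> float:
    """Expected payoff when the counterpart mirrors the predictor's
    forecast of your move; q = the predictor's accuracy."""
    if cooperate:
        # Predicted correctly (prob q) -> mutual cooperation;
        # mispredicted -> counterpart defects, you get the sucker's payoff.
        return q * R + (1 - q) * S
    # Predicted correctly -> mutual defection;
    # mispredicted -> counterpart cooperates, you get the temptation payoff.
    return q * P + (1 - q) * T

for q in (0.5, 0.75, 0.95):
    print(f"q={q}: cooperate {payoff(True, q):.2f}, defect {payoff(False, q):.2f}")
```

With these payoffs cooperation overtakes defection once q exceeds 5/7, roughly 0.71: the better people are at reading character, the broader the range of circumstances in which cooperating is the self-interested choice, with no need for an indefinite horizon of future interactions.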

This may explain why the doctrine of Predestination tends to bear good fruit, even though it would seem to imply the doctrine of Antinomianism, which heretical doctrine bears horrifyingly bad fruit.
