Causal decision theory
Causal decision theory (CDT) is a school of thought within decision theory which states that, when a rational agent is confronted with a set of possible actions, one should select the action that causes the best outcome in expectation. CDT contrasts with evidential decision theory (EDT), which recommends the action that would be indicative of the best outcome if one received the "news" that it had been taken.[1] In other words, EDT recommends that one "do what you most want to learn that you will do."[2]:7
Informal description
Informally, causal decision theory recommends that the agent make the decision with the best expected causal consequences. For example: if eating an apple will cause you to be happy and eating an orange will cause you to be sad, then you would be rational to eat the apple. One complication is the notion of expected causal consequences. Imagine that eating a good apple will cause you to be happy and eating a bad apple will cause you to be sad, but you aren't sure whether the apple is good or bad. In this case you don't know the causal effects of eating the apple. Instead, you work from the expected causal effects, which depend on three things: (1) how likely you think the apple is to be good and how likely you think it is to be bad; (2) how happy eating a good apple makes you; and (3) how sad eating a bad apple makes you.
Formal description
In a 1981 article, Allan Gibbard and William Harper explained causal decision theory as maximization of the expected utility [math]\displaystyle{ U }[/math] of an action [math]\displaystyle{ A }[/math] "calculated from probabilities of counterfactuals":[3]
- [math]\displaystyle{ U(A)=\sum\limits_{j} P(A \gt O_j) D(O_j), }[/math]
where [math]\displaystyle{ D(O_j) }[/math] is the desirability of outcome [math]\displaystyle{ O_j }[/math] and [math]\displaystyle{ P(A \gt O_j) }[/math] is the counterfactual probability that, if [math]\displaystyle{ A }[/math] were done, then [math]\displaystyle{ O_j }[/math] would hold.
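As a minimal sketch of this calculation, the apple example from the informal description can be plugged into the formula directly; the counterfactual probabilities and desirabilities below are invented for illustration and do not come from Gibbard and Harper:

```python
# A sketch of the Gibbard-Harper expected utility U(A), using made-up
# counterfactual probabilities and desirabilities for the apple example.

def causal_expected_utility(counterfactual_probs, desirability):
    """U(A) = sum over outcomes O_j of P(A > O_j) * D(O_j)."""
    return sum(p * desirability[outcome]
               for outcome, p in counterfactual_probs.items())

# Hypothetical credences: P(eat apple > happy) = 0.8, P(eat apple > sad) = 0.2.
eat_apple = {"happy": 0.8, "sad": 0.2}
desirability = {"happy": 10, "sad": -5}

print(causal_expected_utility(eat_apple, desirability))  # 0.8*10 + 0.2*(-5) = 7.0
```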
Difference from evidential decision theory
David Lewis proved[4] that the probability of a conditional [math]\displaystyle{ P(A \gt O_j) }[/math] does not always equal the conditional probability [math]\displaystyle{ P(O_j | A) }[/math][5] (see also Lewis's triviality result). If the two were always equal, causal decision theory would be equivalent to evidential decision theory, which uses conditional probabilities.
Gibbard and Harper showed that if we accept two axioms (one related to the controversial principle of the conditional excluded middle[6]), then the statistical independence of [math]\displaystyle{ A }[/math] and [math]\displaystyle{ A \gt O_j }[/math] suffices to guarantee that [math]\displaystyle{ P(A \gt O_j) = P(O_j | A) }[/math]. However, there are cases in which actions and conditionals are not independent. Gibbard and Harper give an example in which King David wants Bathsheba but fears that summoning her would provoke a revolt.
Further, David has studied works on psychology and political science which teach him the following: Kings have two personality types, charismatic and uncharismatic. A king's degree of charisma depends on his genetic make-up and early childhood experiences, and cannot be changed in adulthood. Now, charismatic kings tend to act justly and uncharismatic kings unjustly. Successful revolts against charismatic kings are rare, whereas successful revolts against uncharismatic kings are frequent. Unjust acts themselves, though, do not cause successful revolts; the reason uncharismatic kings are prone to successful revolts is that they have a sneaky, ignoble bearing. David does not know whether or not he is charismatic; he does know that it is unjust to send for another man's wife. (p. 164)
In this case, evidential decision theory recommends that David abstain from Bathsheba, while causal decision theory—noting that whether David is charismatic or uncharismatic cannot be changed—recommends sending for her.
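The difference can be made concrete with a toy calculation in which every credence and utility is hypothetical: a revolt depends only on whether David is charismatic, and sending for Bathsheba is evidence about his charisma without causally affecting the revolt.

```python
# A toy rendering of the King David case; every credence and utility below is
# hypothetical. A revolt depends only on whether David is charismatic; sending
# for Bathsheba is evidence about his charisma but does not cause a revolt.

p_charismatic_unconditional = 0.7
p_charismatic_given = {"send": 0.1, "abstain": 0.9}  # acts as evidence of charisma
p_revolt = {"charismatic": 0.05, "uncharismatic": 0.6}
u_bathsheba, u_revolt = 10, -100

def expected_value(act, p_char):
    """Expected utility of an act, given a credence that David is charismatic."""
    p_rev = p_char * p_revolt["charismatic"] + (1 - p_char) * p_revolt["uncharismatic"]
    return (u_bathsheba if act == "send" else 0) + p_rev * u_revolt

# CDT holds the charisma credence fixed, so sending beats abstaining by u_bathsheba:
cdt = {act: expected_value(act, p_charismatic_unconditional) for act in ("send", "abstain")}
# EDT conditions on the act, so sending makes a revolt look much more likely:
edt = {act: expected_value(act, p_charismatic_given[act]) for act in ("send", "abstain")}

print(cdt)  # send ≈ -11.5, abstain ≈ -21.5  -> send
print(edt)  # send ≈ -44.5, abstain ≈ -10.5  -> abstain
```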
When required to choose between causal decision theory and evidential decision theory, philosophers usually prefer causal decision theory.[7] In a survey of professional philosophers published in 2021, 27.1% of respondents chose EDT while 29.9% chose CDT.[8]
Thought experiments
Different decision theories are often compared by examining their recommendations for action in various thought experiments.
Newcomb's paradox
In Newcomb's paradox, there is a predictor, a player, and two boxes designated A and B. The predictor is able to reliably predict the player's choices (say, with 99% accuracy). The player is given a choice between taking only box B, or taking both boxes A and B. The player knows the following:[9]
- Box A is transparent and always contains a visible $1,000.
- Box B is opaque, and its content has already been set by the predictor:
  - If the predictor has predicted that the player will take both boxes A and B, then box B contains nothing.
  - If the predictor has predicted that the player will take only box B, then box B contains $1,000,000.
The player does not know what the predictor predicted or what box B contains while making the choice. Should the player take both boxes, or only box B?
Causal decision theory recommends taking both boxes in this scenario, because at the moment when the player must make a decision, the predictor has already made a prediction; the player's action therefore cannot affect the contents of box B.
Evidential decision theory (EDT), by contrast, recommends that the player take only box B, because taking only box B is strong evidence that the predictor anticipated that the player would take only box B, and therefore it is very likely that box B contains $1,000,000. Conversely, choosing to take both boxes is strong evidence that the predictor knew the player would take both boxes; therefore we should expect box B to contain nothing.[10]:22
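The two calculations can be sketched using the 99% accuracy figure above; the credence parameter p in the causal calculation is an illustrative free parameter, since CDT only requires that it be independent of the act:

```python
# A sketch of the evidential and causal expected-value calculations for
# Newcomb's paradox, using the 99% predictor accuracy mentioned above.

accuracy = 0.99

# EDT conditions on the choice, since the choice is evidence about what the
# predictor already put in box B.
edt_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0        # 990,000
edt_two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000    # 11,000

# CDT treats box B's contents as fixed; p is the agent's act-independent
# credence that box B contains $1,000,000 (an illustrative free parameter).
def cdt_values(p):
    one_box = p * 1_000_000
    two_box = p * 1_001_000 + (1 - p) * 1_000  # always exactly $1,000 more
    return one_box, two_box

print(edt_one_box, edt_two_box)  # EDT: take only box B
print(cdt_values(0.5))           # CDT: two-boxing dominates for every p
```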
Criticism
Vagueness
Causal decision theory does not itself specify what algorithm to use to calculate the counterfactual probabilities.[6] One proposal is the "imaging" technique suggested by Lewis:[11] to evaluate [math]\displaystyle{ P(A \gt O_j) }[/math], move probability mass from each possible world [math]\displaystyle{ w }[/math] to the closest possible world [math]\displaystyle{ w_A }[/math] in which [math]\displaystyle{ A }[/math] holds, assuming [math]\displaystyle{ A }[/math] is possible. However, this procedure requires that we know what we would believe if we were certain of [math]\displaystyle{ A }[/math]; this is itself a conditional to which we might assign probability less than 1, leading to a regress.[6]
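A minimal sketch of imaging over a toy set of possible worlds might look as follows; the worlds, their probabilities, and the closest-world assignments are all invented for illustration:

```python
# A minimal sketch of Lewis-style "imaging", assuming a toy finite set of
# possible worlds and a hypothetical closest-world judgment supplied by hand.

# Each world fixes whether the act A was done and which outcome O holds.
worlds = {
    "w1": {"A": True,  "O": "happy", "prob": 0.3},
    "w2": {"A": True,  "O": "sad",   "prob": 0.1},
    "w3": {"A": False, "O": "happy", "prob": 0.2},
    "w4": {"A": False, "O": "sad",   "prob": 0.4},
}

# Hypothetical similarity judgment: the closest A-world to each world.
closest_A_world = {"w1": "w1", "w2": "w2", "w3": "w1", "w4": "w2"}

def image_on_A(worlds, closest_A_world):
    """Move each world's probability mass to its closest A-world."""
    imaged = {name: 0.0 for name in worlds}
    for name, w in worlds.items():
        imaged[closest_A_world[name]] += w["prob"]
    return imaged

imaged = image_on_A(worlds, closest_A_world)
# P(A > O=happy) is the imaged probability of the worlds where O is "happy".
p_happy_if_A = sum(p for name, p in imaged.items() if worlds[name]["O"] == "happy")
print(p_happy_if_A)  # ≈ 0.5 under these made-up numbers
```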
Counterexamples
There are innumerable "counterexamples" where, it is argued, a straightforward application of CDT fails to produce a defensibly "sane" decision. Philosopher Andy Egan argues this is due to a fundamental disconnect between the intuitive rational rule, "do what you expect will bring about the best results", and CDT's algorithm of "do whatever has the best expected outcome, holding fixed our initial views about the likely causal structure of the world." In this view, it is CDT's requirement to "hold fixed the agent’s unconditional credences in dependency hypotheses" that leads to irrational decisions.[12]
An early alleged counterexample is Newcomb's problem. Because your choice of one or two boxes cannot causally affect the predictor's guess, causal decision theory recommends the two-boxing strategy.[3] However, this results in getting only $1,000, not $1,000,000. Philosophers disagree whether one-boxing or two-boxing is the "rational" strategy.[13] Similar concerns may arise even in seemingly straightforward problems like the prisoner's dilemma,[14] especially when playing opposite your "twin", whose choice to cooperate or defect correlates strongly with, but is not caused by, your own choice.[15]
In the "Death in Damascus" scenario, an anthropomorphic "Death" predicts where you will be tomorrow, and goes to wait for you there. As in Newcomb's problem, we postulate that Death is a reliable predictor. A CDT agent would be unable to process the correlation, and may as a consequence make irrational decisions:[12][16][17]
Recently, a few variants of Death in Damascus have been proposed in which an agent who follows CDT's recommendations voluntarily loses money or, relatedly, forgoes a guaranteed payoff.[18][19][20] One example is the Adversarial Offer:[19] "Two boxes are on offer. A buyer may purchase one or none of the boxes but not both. Each of the two boxes costs $1. Yesterday, the seller put $3 in each box that she predicted the buyer not to acquire. Both the seller and the buyer believe the seller’s prediction to be accurate with probability 0.75." Adopting the buyer's perspective, CDT reasons that at least one box contains $3, so the average box contains at least $1.50 in causal expected value, which is more than the cost. Hence, CDT requires buying one of the two boxes. However, buying a box is profitable for the seller: with probability 0.75 the purchased box is one the seller predicted would be bought, and is therefore empty, so the buyer's expected payoff of $0.75 falls short of the $1 price.
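The arithmetic behind both the buyer's CDT reasoning and the seller's profit can be sketched as follows, using only the figures given in the quotation:

```python
# A sketch of the Adversarial Offer, with the numbers from the quotation above.

price, prize, accuracy = 1, 3, 0.75

# CDT reasoning: the boxes are already filled and at least one contains $3,
# so a randomly chosen box has causal expected value of at least $1.50 > $1.
cdt_value_of_random_box = prize / 2  # at least 1.5

# Actual expected payoff of buying a box: with probability 0.75 the seller
# predicted this purchase, so the purchased box is empty.
expected_payoff = (1 - accuracy) * prize - price  # 0.25 * 3 - 1 = -0.25

print(cdt_value_of_random_box, expected_payoff)
# The buyer loses $0.25 on average, so the offer is profitable for the seller.
```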
Another recent counterexample is the "Psychopath Button":[12][21]
Paul is debating whether to press the ‘kill all psychopaths’ button. It would, he thinks, be much better to live in a world with no psychopaths. Unfortunately, Paul is quite confident that only a psychopath would press such a button. Paul very strongly prefers living in a world with psychopaths to dying. Should Paul press the button?
According to Egan, "pretty much everyone" agrees that Paul should not press the button, yet CDT endorses pressing the button.[12]
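A toy rendering of the two calculations, with hypothetical credences and utilities (no specific numbers are given in Egan's paper), might look like this:

```python
# A toy rendering of Egan's Psychopath Button; all credences and utilities
# below are hypothetical.

p_psychopath = 0.01             # Paul's unconditional credence that he is a psychopath
p_psychopath_given_press = 0.9  # conditional on pressing, since only a psychopath would press
u_no_psychopaths, u_status_quo, u_dead = 10, 0, -100

def value_of_pressing(p_psycho):
    """Expected utility of pressing, given a credence that Paul is a psychopath."""
    return p_psycho * u_dead + (1 - p_psycho) * u_no_psychopaths

cdt_press = value_of_pressing(p_psychopath)              # ≈ 0.01*(-100) + 0.99*10 = 8.9
edt_press = value_of_pressing(p_psychopath_given_press)  # ≈ 0.9*(-100) + 0.1*10 = -89

print(cdt_press, u_status_quo)  # CDT: pressing (≈8.9) beats not pressing (0)
print(edt_press, u_status_quo)  # evidential reasoning: don't press
```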
Philosopher Jim Joyce, perhaps the most prominent modern defender of CDT,[22] argues that CDT is naturally capable of taking into account any "information about what one is inclined or likely to do as evidence". This interpretation of CDT raises additional issues: how can a CDT agent avoid forming beliefs about its own future acts, and thus becoming provably inconsistent via Gödelian incompleteness and Löb's theorem? How does an agent standing on a cliff avoid inferring that, if he were to jump, he would probably have a parachute to break his fall?[23][24]
See also
- Decision making
- Evidential decision theory
- Expected utility hypothesis
- Game theory
- Newcomb's paradox
Notes
- ↑ Ahmed, Arif (2021). Evidential Decision Theory. Cambridge University Press. p. 7. ISBN 9781108607865.
- ↑ Ahmed, Arif (2021). Evidential Decision Theory. Cambridge University Press. ISBN 9781108607865.
- ↑ 3.0 3.1 Gibbard, A.; Harper, W.L. (1981), "Counterfactuals and two kinds of expected utility", Ifs: Conditionals, Beliefs, Decision, Chance, and Time: 153–190
- ↑ Lewis, D. (1976), "Probabilities of conditionals and conditional probabilities", The Philosophical Review 85 (3): 297–315, doi:10.2307/2184045
- ↑ In fact, Lewis proved a stronger result: "if a class of probability functions is closed under conditionalizing, then there can be no probability conditional for that class unless the class consists entirely of trivial probability functions," where a trivial probability function is one that "never assigns positive probability to more than two incompatible alternatives, and hence is at most four-valued [...]."
- ↑ 6.0 6.1 6.2 Shaffer, Michael John (2009), "Decision Theory, Intelligent Planning and Counterfactuals", Minds and Machines 19 (1): 61–92, doi:10.1007/s11023-008-9126-2, https://philpapers.org/rec/SHADTI
- ↑ Weirich, Paul, "Causal Decision Theory", The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/win2016/entries/decision-causal/
- ↑ Yaden, David (2021). "The psychology of philosophy: Associating philosophical views with psychological traits in professional philosophers". Philosophical Psychology 34 (5): 721–755. doi:10.1080/09515089.2021.1915972.
- ↑ Wolpert, D. H.; Benford, G. (June 2013). "The lesson of Newcomb's paradox". Synthese 190 (9): 1637–1646. doi:10.1007/s11229-011-9899-3.
- ↑ Ahmed, Arif (2021). Evidential Decision Theory. Cambridge University Press. ISBN 9781108607865.
- ↑ Lewis, D. (1981), "Causal decision theory", Australasian Journal of Philosophy 59 (1): 5–30, doi:10.1080/00048408112340011, http://www.informaworld.com/index/739194078.pdf, retrieved 2009-05-29
- ↑ 12.0 12.1 12.2 12.3 Egan, A. (2007), "Some counterexamples to causal decision theory", The Philosophical Review 116 (1): 93–114, doi:10.1215/00318108-2006-023, http://andyegan.net/Andy_Egan/Papers_files/nocdt.2006.06.28.pdf, retrieved 2017-07-27
- ↑ Bellos, Alex (28 November 2016). "Newcomb's problem divides philosophers. Which side are you on?". The Guardian. https://www.theguardian.com/science/alexs-adventures-in-numberland/2016/nov/28/newcombs-problem-divides-philosophers-which-side-are-you-on.
- ↑ Lewis, D. (1979), "Prisoners' dilemma is a Newcomb problem", Philosophy & Public Affairs 8 (3): 235–240
- ↑ Howard, J. V. (May 1988). "Cooperation in the Prisoner's Dilemma". Theory and Decision 24 (3): 203–213. doi:10.1007/BF00148954.
- ↑ Meacham, Christopher JG. "Binding and its consequences." Philosophical studies 149.1 (2010): 49-71.
- ↑ Harper, William (January 1984). "Ratifiability and Causal Decision Theory: Comments on Eells and Seidenfeld". PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1984 (2): 213–228. doi:10.1086/psaprocbienmeetp.1984.2.192506.
- ↑ Spencer, J. (2020), "An argument against causal decision theory", Analysis 81: 52–61, doi:10.1093/analys/anaa037, http://www.jackspencer.org/uploads/1/4/0/3/14038590/cdt_and_the_guaranteed_principle.pdf, retrieved 2021-04-23
- ↑ 19.0 19.1 Oesterheld, C.; Conitzer, V. (2021), "Extracting Money from Causal Decision Theorists", The Philosophical Quarterly 71 (4), doi:10.1093/pq/pqaa086
- ↑ Joyce, James M., Yet Another Refutation of Causal Decision Theory?, http://www-personal.umich.edu/~jjoyce/papers/UI.pdf, retrieved 2021-04-23
- ↑ Greaves, Hilary. "Epistemic decision theory." Mind 122.488 (2013): 915-952.
- ↑ Wedgwood, Ralph. "Gandalf’s solution to the Newcomb problem." Synthese (2013): 1-33.
- ↑ Weirich, Paul, "Causal Decision Theory", The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/win2016/entries/decision-causal/
- ↑ Joyce, James M. "Regret and instability in causal decision theory." Synthese 187.1 (2012): 123–145.
External links
- Causal Decision Theory at the Stanford Encyclopedia of Philosophy
- The Logic of Conditionals at the Stanford Encyclopedia of Philosophy
Original source: https://en.wikipedia.org/wiki/Causal_decision_theory