Rubin causal model
The Rubin causal model (RCM), also known as the Neyman–Rubin causal model,[1] is an approach to the statistical analysis of cause and effect based on the framework of potential outcomes, named after Donald Rubin. The name "Rubin causal model" was first coined by Paul W. Holland.[2] The potential outcomes framework was first proposed by Jerzy Neyman in his 1923 Master's thesis,[3] though he discussed it only in the context of completely randomized experiments.[4] Rubin extended it into a general framework for thinking about causation in both observational and experimental studies.[1]
Introduction
The Rubin causal model is based on the idea of potential outcomes. For example, a person would have a particular income at age 40 if they had attended college, whereas they would have a different income at age 40 if they had not attended college. To measure the causal effect of going to college for this person, we need to compare the outcome for the same individual in both alternative futures. Since it is impossible to see both potential outcomes at once, one of the potential outcomes is always missing. This dilemma is the "fundamental problem of causal inference."[2]
Because of the fundamental problem of causal inference, unit-level causal effects cannot be directly observed. However, randomized experiments allow for the estimation of population-level causal effects.[5] A randomized experiment assigns people randomly to treatments: college or no college. Because of this random assignment, the groups are (on average) equivalent, and the difference in income at age 40 can be attributed to the college assignment since that was the only difference between the groups. An estimate of the average causal effect (also referred to as the average treatment effect or ATE) can then be obtained by computing the difference in means between the treated (college-attending) and control (not-college-attending) samples.
In many circumstances, however, randomized experiments are not possible due to ethical or practical concerns. In such scenarios there is a non-random assignment mechanism. This is the case for the example of college attendance: people are not randomly assigned to attend college. Rather, people may choose to attend college based on their financial situation, parents' education, and so on. Many statistical methods have been developed for causal inference, such as propensity score matching. These methods attempt to correct for the assignment mechanism by finding control units similar to treatment units.
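As an illustration of the matching idea, the sketch below pairs each college-attending unit with the non-attending unit whose estimated propensity score is closest, which (under the usual unconfoundedness and overlap assumptions) estimates the effect of treatment on the treated. The covariates (parental education, family income), the data-generating values, and the use of scikit-learn's logistic regression are illustrative assumptions, not taken from the source.

```python
# Minimal sketch: nearest-neighbour matching on an estimated propensity score.
# Covariates (parental education, family income) and all numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
parent_educ = rng.normal(12, 3, n)           # years of parental education
family_income = rng.normal(50, 15, n)        # family income in $1000s
# Treatment (college attendance) depends on the covariates: non-random assignment.
p_attend = 1 / (1 + np.exp(-(-6 + 0.3 * parent_educ + 0.03 * family_income)))
college = rng.binomial(1, p_attend)
# Outcome: income at age 40, with a true causal effect of +10 ($1000s).
income_40 = 20 + 10 * college + 1.5 * parent_educ + 0.4 * family_income + rng.normal(0, 5, n)

# Step 1: estimate propensity scores P(college | covariates).
X = np.column_stack([parent_educ, family_income])
pscore = LogisticRegression().fit(X, college).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the control unit with the closest propensity score.
treated = np.where(college == 1)[0]
controls = np.where(college == 0)[0]
matched = controls[np.abs(pscore[controls][None, :] - pscore[treated][:, None]).argmin(axis=1)]

# Step 3: the average difference between treated units and their matches
# estimates the effect of treatment on the treated.
naive = income_40[college == 1].mean() - income_40[college == 0].mean()
matched_est = (income_40[treated] - income_40[matched]).mean()
print(f"naive difference in means: {naive:.1f}, matched estimate: {matched_est:.1f}")
```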
An extended example
Rubin defines a causal effect:
"Intuitively, the causal effect of one treatment, E, over another, C, for a particular unit and an interval of time from [math]\displaystyle{ t_1 }[/math] to [math]\displaystyle{ t_2 }[/math] is the difference between what would have happened at time [math]\displaystyle{ t_2 }[/math] if the unit had been exposed to E initiated at [math]\displaystyle{ t_1 }[/math] and what would have happened at [math]\displaystyle{ t_2 }[/math] if the unit had been exposed to C initiated at [math]\displaystyle{ t_1 }[/math]: 'If an hour ago I had taken two aspirins instead of just a glass of water, my headache would now be gone,' or 'because an hour ago I took two aspirins instead of just a glass of water, my headache is now gone.' Our definition of the causal effect of the E versus C treatment will reflect this intuitive meaning."[5]
According to the RCM, the causal effect of your taking or not taking aspirin one hour ago is the difference between how your head would have felt in case 1 (taking the aspirin) and case 2 (not taking the aspirin). If your headache would remain without aspirin but disappear if you took aspirin, then the causal effect of taking aspirin is headache relief. In most circumstances, we are interested in comparing two futures, one generally termed "treatment" and the other "control". These labels are somewhat arbitrary.
Potential outcomes
Suppose that Joe is participating in an FDA test for a new hypertension drug. If we were omniscient, we would know the outcomes for Joe under both treatment (the new drug) and control (either no treatment or the current standard treatment). The causal effect, or treatment effect, is the difference between these two potential outcomes.
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | 130 | 135 | −5 |
[math]\displaystyle{ Y_t(u) }[/math] is Joe's blood pressure if he takes the new pill. In general, this notation expresses the potential outcome which results from a treatment, t, on a unit, u. Similarly, [math]\displaystyle{ Y_c(u) }[/math] is the potential outcome resulting from a different treatment, c or control, on the same unit, u. In this case, [math]\displaystyle{ Y_c(u) }[/math] is Joe's blood pressure if he doesn't take the pill. [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] is the causal effect of taking the new drug.
From this table we only know the causal effect on Joe. Everyone else in the study might have an increase in blood pressure if they take the pill. However, regardless of what the causal effect is for the other subjects, the causal effect for Joe is lower blood pressure, relative to what his blood pressure would have been if he had not taken the pill.
Consider a larger sample of patients:
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | 130 | 135 | −5 |
Mary | 140 | 150 | −10 |
Sally | 135 | 125 | 10 |
Bob | 135 | 150 | −15 |
The causal effect is different for every subject, but the drug works for Joe, Mary and Bob because the causal effect is negative. Their blood pressure is lower with the drug than it would have been if each did not take the drug. For Sally, on the other hand, the drug causes an increase in blood pressure.
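The unit-level effects in this table are simply the row-wise differences of the two potential outcomes. A minimal sketch, with the values copied from the table above (in mmHg):

```python
# Unit-level causal effects as the difference of the two potential outcomes,
# Y_t(u) - Y_c(u), using the values from the table above.
potential_outcomes = {
    # subject: (Y_t(u), Y_c(u))
    "Joe":   (130, 135),
    "Mary":  (140, 150),
    "Sally": (135, 125),
    "Bob":   (135, 150),
}
for subject, (y_t, y_c) in potential_outcomes.items():
    effect = y_t - y_c
    verdict = "lowers" if effect < 0 else "raises"
    print(f"{subject}: causal effect {effect:+d} ({verdict} blood pressure)")
```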
In order for a potential outcome to make sense, it must be possible, at least a priori. For example, if there is no way for Joe, under any circumstance, to obtain the new drug, then [math]\displaystyle{ Y_t(u) }[/math] is impossible for him. It can never happen. And if [math]\displaystyle{ Y_t(u) }[/math] can never be observed, even in theory, then the causal effect of treatment on Joe's blood pressure is not defined.
No causation without manipulation
The causal effect of the new drug is well defined because it is the simple difference of two potential outcomes, both of which might happen. In this case, we (or something else) can manipulate the world, at least conceptually, so that it is possible that one thing or a different thing might happen.
This definition of causal effects becomes much more problematic if there is no way for one of the potential outcomes to happen, ever. For example, what is the causal effect of Joe's height on his weight? Naively, this seems similar to our other examples. We just need to compare two potential outcomes: what would Joe's weight be under the treatment (where treatment is defined as being 3 inches taller) and what would Joe's weight be under the control (where control is defined as his current height).
A moment's reflection highlights the problem: we can't increase Joe's height. There is no way to observe, even conceptually, what Joe's weight would be if he were taller because there is no way to make him taller. We can't manipulate Joe's height, so it makes no sense to investigate the causal effect of height on weight. Hence the slogan: No causation without manipulation.
Stable unit treatment value assumption (SUTVA)
We require that "the [potential outcome] observation on one unit should be unaffected by the particular assignment of treatments to the other units" (Cox 1958, §2.4). This is called the stable unit treatment value assumption (SUTVA), which goes beyond the concept of independence.
In the context of our example, Joe's blood pressure should not depend on whether or not Mary receives the drug. But what if it does? Suppose that Joe and Mary live in the same house and Mary always cooks. The drug causes Mary to crave salty foods, so if she takes the drug she will cook with more salt than she would have otherwise. A high salt diet increases Joe's blood pressure. Therefore, his outcome will depend on both which treatment he received and which treatment Mary receives.
SUTVA violation makes causal inference more difficult. We can account for dependent observations by considering more treatments. We create 4 treatments by taking into account whether or not Mary receives treatment.
subject | Joe = c, Mary = t | Joe = t, Mary = t | Joe = c, Mary = c | Joe = t, Mary = c |
---|---|---|---|---|
Joe | 140 | 130 | 125 | 120 |
Recall that a causal effect is defined as the difference between two potential outcomes. In this case, there are multiple causal effects because there are more than two potential outcomes. One is the causal effect of the drug on Joe when Mary receives treatment, calculated as [math]\displaystyle{ 130 - 140 }[/math]. Another is the causal effect of the drug on Joe when Mary does not receive treatment, calculated as [math]\displaystyle{ 120 - 125 }[/math]. The third is the causal effect of Mary's treatment on Joe when Joe is not treated, calculated as [math]\displaystyle{ 140 - 125 }[/math]. The treatment Mary receives has a greater causal effect on Joe than the treatment Joe receives has on himself, and it is in the opposite direction.
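A short sketch of these calculations, with Joe's potential outcomes indexed by the pair of assignments (values taken from the table above):

```python
# Expanded treatment space used when SUTVA fails: Joe's potential outcome
# depends on both his own assignment and Mary's (values from the table above).
joe_outcomes = {
    ("c", "t"): 140,   # Joe control, Mary treated
    ("t", "t"): 130,   # Joe treated, Mary treated
    ("c", "c"): 125,   # Joe control, Mary control
    ("t", "c"): 120,   # Joe treated, Mary control
}
# Effect of the drug on Joe when Mary is treated:
print(joe_outcomes[("t", "t")] - joe_outcomes[("c", "t")])   # 130 - 140 = -10
# Effect of the drug on Joe when Mary is untreated:
print(joe_outcomes[("t", "c")] - joe_outcomes[("c", "c")])   # 120 - 125 = -5
# Effect of Mary's treatment on Joe when Joe is untreated:
print(joe_outcomes[("c", "t")] - joe_outcomes[("c", "c")])   # 140 - 125 = +15
```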
By considering more potential outcomes in this way, we can cause SUTVA to hold. However, if any units other than Joe are dependent on Mary, then we must consider further potential outcomes. The greater the number of dependent units, the more potential outcomes we must consider and the more complex the calculations become (consider an experiment with 20 different people, each of whose treatment status can affect outcomes for everyone else). In order to (easily) estimate the causal effect of a single treatment relative to a control, SUTVA should hold.
Average causal effect
Consider:
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | 130 | 135 | −5 |
Mary | 130 | 145 | −15 |
Sally | 130 | 145 | −15 |
Bob | 140 | 150 | −10 |
James | 145 | 140 | +5 |
MEAN | 135 | 143 | −8 |
One may calculate the average causal effect (also known as the average treatment effect or ATE) by taking the mean of all the causal effects.
How we measure the response affects what inferences we draw. Suppose that we measure changes in blood pressure as a percentage change rather than in absolute values. Then, depending on the exact numbers, the average causal effect might be an increase in blood pressure. For example, assume that George's blood pressure would be 154 under control and 140 with treatment. The absolute size of the causal effect is −14, but the percentage difference (in terms of the treatment level of 140) is −10%. If Sarah's blood pressure is 200 under treatment and 184 under control, then the causal effect is 16 in absolute terms but 8% in terms of the treatment value. A smaller absolute change in blood pressure (−14 versus 16) yields a larger percentage change (−10% versus 8%) for George. Even though the average causal effect for George and Sarah is +1 in absolute terms, it is −1% in percentage terms.
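The sketch below reproduces this comparison for the hypothetical values of George and Sarah given above, taking the percentage effect relative to the treatment level as in the text:

```python
# Average causal effect on the absolute scale versus the percentage scale,
# using the hypothetical values for George and Sarah from the text.
patients = {
    # name: (Y_t(u), Y_c(u)) in mmHg
    "George": (140, 154),
    "Sarah":  (200, 184),
}
absolute = [y_t - y_c for y_t, y_c in patients.values()]
percent = [100 * (y_t - y_c) / y_t for y_t, y_c in patients.values()]  # relative to treatment level
print(sum(absolute) / len(absolute))   # +1.0 mmHg  (average absolute effect)
print(sum(percent) / len(percent))     # -1.0 %     (average percentage effect)
```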
The fundamental problem of causal inference
The results we have seen up to this point would never be measured in practice. It is impossible, by definition, to observe the effect of more than one treatment on a subject over a specific time period. Joe cannot both take the pill and not take the pill at the same time. Therefore, the data would look something like this:
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | 130 | ? | ? |
Question marks are responses that could not be observed. The Fundamental Problem of Causal Inference[2] is that directly observing causal effects is impossible. However, this does not make causal inference impossible. Certain techniques and assumptions allow the fundamental problem to be overcome.
Assume that we have the following data:
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | 130 | ? | ? |
Mary | ? | 125 | ? |
Sally | 100 | ? | ? |
Bob | ? | 130 | ? |
James | ? | 120 | ? |
MEAN | 115 | 125 | −10 |
We can infer what Joe's potential outcome under control would have been if we make an assumption of constant effect:
- [math]\displaystyle{ Y_t(u) = T+Y_c(u) }[/math]
and
- [math]\displaystyle{ Y_t(u) - T = Y_c(u). }[/math]
where T is the average treatment effect; in this case, −10.
If we wanted to infer the unobserved values, we could assume a constant effect. The following table illustrates data consistent with the assumption of a constant effect.
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | 130 | 140 | −10 |
Mary | 115 | 125 | −10 |
Sally | 100 | 110 | −10 |
Bob | 120 | 130 | −10 |
James | 110 | 120 | −10 |
MEAN | 115 | 125 | −10 |
All of the subjects have the same causal effect even though they have different outcomes under the treatment.
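A minimal sketch of this imputation, using the observed values from the incomplete table above and the constant-effect equations:

```python
# Impute the missing potential outcomes under the constant-effect assumption
# Y_t(u) = T + Y_c(u), with T estimated by the difference in observed means
# (115 - 125 = -10 in the table above).
observed = {
    # subject: (Y_t(u) or None, Y_c(u) or None)
    "Joe":   (130, None),
    "Mary":  (None, 125),
    "Sally": (100, None),
    "Bob":   (None, 130),
    "James": (None, 120),
}
treated = [y_t for y_t, _ in observed.values() if y_t is not None]
control = [y_c for _, y_c in observed.values() if y_c is not None]
T = sum(treated) / len(treated) - sum(control) / len(control)   # -10

completed = {}
for subject, (y_t, y_c) in observed.items():
    y_t = y_t if y_t is not None else y_c + T    # impute treated outcome
    y_c = y_c if y_c is not None else y_t - T    # impute control outcome
    completed[subject] = (y_t, y_c, y_t - y_c)

for subject, row in completed.items():
    print(subject, row)   # every imputed causal effect equals T = -10
```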
The assignment mechanism
The assignment mechanism, the method by which units are assigned treatment, affects the calculation of the average causal effect. One such assignment mechanism is randomization. For each subject we could flip a coin to determine if she receives treatment. If we wanted five subjects to receive treatment, we could assign treatment to the first five names we pick out of a hat. When we randomly assign treatments we may get different answers.
Assume that this data is the truth:
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | 130 | 115 | 15 |
Mary | 120 | 125 | −5 |
Sally | 100 | 125 | −25 |
Bob | 110 | 130 | −20 |
James | 115 | 120 | −5 |
MEAN | 115 | 123 | −8 |
The true average causal effect is −8. But the causal effect for these individuals is never equal to this average. The causal effect varies, as it generally does in real life. After assigning treatments randomly, we might estimate the causal effect as:
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | 130 | ? | ? |
Mary | 120 | ? | ? |
Sally | ? | 125 | ? |
Bob | ? | 130 | ? |
James | 115 | ? | ? |
MEAN | 121.67 | 127.5 | −5.83 |
A different random assignment of treatments yields a different estimate of the average causal effect.
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | 130 | ? | ? |
Mary | 120 | ? | ? |
Sally | 100 | ? | ? |
Bob | ? | 130 | ? |
James | ? | 120 | ? |
MEAN | 116.67 | 125 | −8.33 |
The average causal effect varies because our sample is small and the responses have a large variance. If the sample were larger and the variance were less, the average causal effect would be closer to the true average causal effect regardless of the specific units randomly assigned to treatment.
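The sketch below repeats the random assignment many times for the "truth" table above (three treated, two control, as in the two assignments shown). Individual estimates scatter widely, but their average over many randomizations is close to the true average causal effect of −8.

```python
# Repeat the random assignment many times and compare difference-in-means
# estimates with the true ATE of -8 (potential outcomes from the table above).
import random

y_t = {"Joe": 130, "Mary": 120, "Sally": 100, "Bob": 110, "James": 115}
y_c = {"Joe": 115, "Mary": 125, "Sally": 125, "Bob": 130, "James": 120}
subjects = list(y_t)

def one_experiment(n_treated=3):
    treated = set(random.sample(subjects, n_treated))
    treat_mean = sum(y_t[s] for s in treated) / n_treated
    control = [s for s in subjects if s not in treated]
    control_mean = sum(y_c[s] for s in control) / len(control)
    return treat_mean - control_mean

estimates = [one_experiment() for _ in range(10_000)]
print(sum(estimates) / len(estimates))   # close to the true ATE of -8 on average
print(min(estimates), max(estimates))    # individual estimates vary widely
```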
Alternatively, suppose the mechanism assigns the treatment to all men and only to them.
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | 130 | ? | ? |
Bob | 110 | ? | ? |
James | 105 | ? | ? |
Mary | ? | 130 | ? |
Sally | ? | 125 | ? |
Laila | ? | 135 | ? |
MEAN | 115 | 130 | −15 |
Under this assignment mechanism, it is impossible for women to receive treatment and therefore impossible to determine the average causal effect on female subjects. In order to make any inference about the causal effect on a subject, the probability that the subject receives treatment must be greater than 0 and less than 1.
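A minimal sketch of checking this requirement (often called positivity or overlap) within covariate groups; the data and the grouping by sex are hypothetical:

```python
# Check the positivity requirement: within every covariate group, the share of
# treated units must be strictly between 0 and 1. Data are hypothetical.
from collections import defaultdict

units = [
    # (sex, treated?)
    ("male", True), ("male", True), ("male", True),
    ("female", False), ("female", False), ("female", False),
]
by_group = defaultdict(list)
for sex, treated in units:
    by_group[sex].append(treated)

for sex, flags in by_group.items():
    p = sum(flags) / len(flags)
    ok = 0 < p < 1
    print(f"{sex}: P(treatment) = {p:.2f} -> "
          f"{'ok' if ok else 'no overlap, causal effect not identified'}")
```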
The perfect doctor
Consider the use of the perfect doctor as an assignment mechanism. The perfect doctor knows how each subject will respond to the drug or the control and assigns each subject to the treatment that will most benefit her. The perfect doctor knows this information about a sample of patients:
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | 130 | 115 | 15 |
Bob | 120 | 125 | −5 |
James | 100 | 150 | −50 |
Mary | 115 | 125 | −10 |
Sally | 120 | 130 | −10 |
Laila | 135 | 105 | 30 |
MEAN | 120 | 125 | −5 |
Based on this knowledge she would make the following treatment assignments:
subject | [math]\displaystyle{ Y_t(u) }[/math] | [math]\displaystyle{ Y_c(u) }[/math] | [math]\displaystyle{ Y_t(u) - Y_c(u) }[/math] |
---|---|---|---|
Joe | ? | 115 | ? |
Bob | 120 | ? | ? |
James | 100 | ? | ? |
Mary | 115 | ? | ? |
Sally | 120 | ? | ? |
Laila | ? | 105 | ? |
MEAN | 113.75 | 110 | 3.75 |
The perfect doctor distorts both averages by filtering out poor responses to both the treatment and control. The difference between means, which is the supposed average causal effect, is distorted in a direction that depends on the details. For instance, a subject like Laila who is harmed by taking the drug would be assigned to the control group by the perfect doctor and thus the negative effect of the drug would be masked.
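A short sketch that reproduces the perfect-doctor numbers from the two tables above and contrasts the observed difference in means with the true average causal effect:

```python
# The "perfect doctor" assigns each subject to whichever arm yields the lower
# blood pressure; the observed difference in means then fails to recover the
# true average causal effect (potential outcomes from the table above).
y_t = {"Joe": 130, "Bob": 120, "James": 100, "Mary": 115, "Sally": 120, "Laila": 135}
y_c = {"Joe": 115, "Bob": 125, "James": 150, "Mary": 125, "Sally": 130, "Laila": 105}

true_ate = sum(y_t[s] - y_c[s] for s in y_t) / len(y_t)            # -5

treated = [s for s in y_t if y_t[s] < y_c[s]]                      # benefit from the drug
control = [s for s in y_t if y_t[s] >= y_c[s]]                     # harmed by the drug
observed_diff = (sum(y_t[s] for s in treated) / len(treated)
                 - sum(y_c[s] for s in control) / len(control))    # 113.75 - 110 = +3.75

print(true_ate, observed_diff)   # -5 vs +3.75: the estimate even has the wrong sign
```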
Conclusion
The causal effect of a treatment on a single unit at a point in time is the difference between the outcome variable with the treatment and without the treatment. The Fundamental Problem of Causal Inference is that it is impossible to observe the causal effect on a single unit. You either take the aspirin now or you don't. As a consequence, assumptions must be made in order to estimate the missing counterfactuals.
The Rubin causal model has also been connected to instrumental variables (Angrist, Imbens, and Rubin, 1996)[6] and other techniques for causal inference. For more on the connections between the Rubin causal model, structural equation modeling, and other statistical methods for causal inference, see Morgan and Winship (2007)[7] and Pearl (2000).[8] Pearl (2000) argues that all potential outcomes can be derived from Structural Equation Models (SEMs) thus unifying econometrics and modern causal analysis.
References
- ↑ 1.0 1.1 Sekhon, Jasjeet (2007). "The Neyman–Rubin Model of Causal Inference and Estimation via Matching Methods". The Oxford Handbook of Political Methodology. http://sekhon.berkeley.edu/papers/SekhonOxfordHandbook.pdf.
- ↑ 2.0 2.1 2.2 Holland, Paul W. (1986). "Statistics and Causal Inference". J. Amer. Statist. Assoc. 81 (396): 945–960. doi:10.1080/01621459.1986.10478354.
- ↑ Neyman, Jerzy. Sur les applications de la theorie des probabilites aux experiences agricoles: Essai des principes. Master's Thesis (1923). Excerpts reprinted in English, Statistical Science, Vol. 5, pp. 463–472. (D. M. Dabrowska, and T. P. Speed, Translators.)
- ↑ Rubin, Donald (2005). "Causal Inference Using Potential Outcomes". J. Amer. Statist. Assoc. 100 (469): 322–331. doi:10.1198/016214504000001880.
- ↑ 5.0 5.1 Rubin, Donald (1974). "Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies". J. Educ. Psychol. 66 (5): 688–701 [p. 689]. doi:10.1037/h0037350.
- ↑ Angrist, J.; Imbens, G.; Rubin, D. (1996). "Identification of Causal effects Using Instrumental Variables". J. Amer. Statist. Assoc. 91 (434): 444–455. doi:10.1080/01621459.1996.10476902. http://www.nber.org/papers/t0136.pdf.
- ↑ Morgan, S.; Winship, C. (2007). Counterfactuals and Causal Inference: Methods and Principles for Social Research. New York: Cambridge University Press. ISBN 978-0-521-67193-4.
- ↑ Pearl, Judea (2000). Causality: Models, Reasoning, and Inference (2nd, 2009 ed.). Cambridge University Press.
Further reading
- Guido Imbens & Donald Rubin (2015). Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139025751
- Donald Rubin (1977). "Assignment to Treatment Group on the Basis of a Covariate", Journal of Educational Statistics, 2, pp. 1–26.
- Rubin, Donald (1978). "Bayesian Inference for Causal Effects: The Role of Randomization", The Annals of Statistics, 6, pp. 34–58.
External links
- "Rubin Causal Model": an article for the New Palgrave Dictionary of Economics by Guido Imbens and Donald Rubin.
- "Counterfactual Causal Analysis": a webpage maintained by Stephen Morgan, Christopher Winship, and others with links to many research articles on causal inference.