Ignorability


In statistics, ignorability is a feature of an experiment design whereby the method of data collection (and the nature of missing data) does not depend on the missing data themselves. A missing data mechanism such as a treatment assignment or survey sampling strategy is "ignorable" if the missing data matrix, which indicates which variables are observed or missing, is independent of the missing data conditional on the observed data.

This idea is part of the Rubin causal model, developed by Donald Rubin from the early 1970s onward, later in collaboration with Paul Rosenbaum. The exact definition differs between their articles from that period. In a 1978 article, Rubin discusses ignorable assignment mechanisms,[1] which can be understood as follows: the way individuals are assigned to treatment groups is irrelevant for the data analysis, given everything that is recorded about those individuals. Later, in 1983,[2] Rosenbaum and Rubin define the stronger condition of strongly ignorable treatment assignment, mathematically formulated as [math]\displaystyle{ (r_1,r_0) \perp \!\!\!\perp z \mid v ,\quad 0\lt \operatorname{pr}(z=1\mid v)\lt 1 \quad \forall v }[/math], where [math]\displaystyle{ r_t }[/math] is the potential outcome under treatment [math]\displaystyle{ t }[/math], [math]\displaystyle{ v }[/math] is a vector of observed covariates, and [math]\displaystyle{ z }[/math] is the actual treatment assignment.
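A small simulation can make the condition concrete. The sketch below is illustrative code only (not from the cited papers; variable names and numbers are invented): treatment [math]\displaystyle{ z }[/math] is assigned using only an observed covariate [math]\displaystyle{ v }[/math], and the propensity stays strictly between 0 and 1, so strong ignorability holds by construction and adjusting for [math]\displaystyle{ v }[/math] recovers the true effect while the raw comparison does not.

```python
# Hypothetical simulation: strong ignorability holds by construction because
# assignment z depends only on the observed covariate v, and 0 < pr(z=1|v) < 1.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

v = rng.binomial(1, 0.5, n)                 # observed binary covariate
r0 = 1.0 + 2.0 * v + rng.normal(0, 1, n)    # potential outcome under control
r1 = r0 + 3.0                               # potential outcome under treatment (true effect = 3)

propensity = np.where(v == 1, 0.7, 0.3)     # 0 < pr(z = 1 | v) < 1 for every v (overlap)
z = rng.binomial(1, propensity)             # assignment depends only on the observed v
y = np.where(z == 1, r1, r0)                # observed outcome (consistency)

# The raw difference in means is confounded by v ...
naive = y[z == 1].mean() - y[z == 0].mean()

# ... but averaging the within-stratum contrasts over the distribution of v
# recovers the true effect, because (r1, r0) is independent of z given v.
strata = [y[(z == 1) & (v == s)].mean() - y[(z == 0) & (v == s)].mean() for s in (0, 1)]
adjusted = np.average(strata, weights=[np.mean(v == 0), np.mean(v == 1)])

print(f"naive: {naive:.2f}   adjusted: {adjusted:.2f}   truth: 3.00")
```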

Pearl[3] devised a simple graphical criterion, called the back-door criterion, that entails ignorability and identifies sets of covariates that achieve this condition.
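As an illustration (a standard statement of Pearl's result, paraphrased here): if the covariate set [math]\displaystyle{ v }[/math] satisfies the back-door criterion relative to treatment [math]\displaystyle{ z }[/math] and outcome [math]\displaystyle{ y }[/math], the causal effect of [math]\displaystyle{ z }[/math] on [math]\displaystyle{ y }[/math] is identified by the adjustment formula

[math]\displaystyle{ P(y \mid \operatorname{do}(z)) = \sum_{v} P(y \mid z, v)\, P(v), }[/math]

which is the graphical counterpart of conditioning on covariates that render the treatment assignment ignorable.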

Ignorability means we can ignore how one ended up in one vs. the other group (‘treated’ [math]\displaystyle{ Tx = 1 }[/math], or ‘control’ [math]\displaystyle{ Tx = 0 }[/math]) when it comes to the potential outcome (say [math]\displaystyle{ Y }[/math]). It has also been called unconfoundedness, selection on the observables, or no omitted variable bias.[4]

Formally it has been written as [math]\displaystyle{ [Y_i^1, Y_i^0] \perp \!\!\!\perp Tx_i }[/math], or in words: the potential [math]\displaystyle{ Y }[/math] outcomes of person [math]\displaystyle{ i }[/math], had they been treated or not, do not depend on whether they actually (observably) received the treatment. In other words, we can ignore how people ended up in one condition versus the other and treat their potential outcomes as exchangeable. While this may seem dense, it becomes clear once we add subscripts for the ‘realized’ world and superscripts for the ‘ideal’ (potential) world (a notation suggested by David Freedman). So [math]\displaystyle{ Y_1^1 / {}^*Y_0^1 }[/math] are the potential [math]\displaystyle{ Y }[/math] outcomes had the person been treated (superscript [math]\displaystyle{ ^1 }[/math]), when in reality they either were treated ([math]\displaystyle{ Y_1^1 }[/math], subscript [math]\displaystyle{ _1 }[/math]) or were not ([math]\displaystyle{ ^*Y_0^1 }[/math], subscript [math]\displaystyle{ _0 }[/math]; the [math]\displaystyle{ ^* }[/math] signals that this quantity can never be realized or observed, i.e. it is fully contrary-to-fact or counterfactual, CF).

Similarly, [math]\displaystyle{ ^*Y_1^0 / Y_0^0 }[/math] are the potential [math]\displaystyle{ Y }[/math] outcomes had the person not been treated (superscript [math]\displaystyle{ ^0 }[/math]), when in reality they either were treated ([math]\displaystyle{ ^*Y_1^0 }[/math], subscript [math]\displaystyle{ _1 }[/math]) or were not ([math]\displaystyle{ Y_0^0 }[/math], subscript [math]\displaystyle{ _0 }[/math]).
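Collecting the notation from the two preceding paragraphs, the four quantities are:

  • [math]\displaystyle{ Y_1^1 }[/math]: outcome under treatment for a person who was actually treated (observable);
  • [math]\displaystyle{ ^*Y_0^1 }[/math]: outcome under treatment for a person who was actually not treated (counterfactual);
  • [math]\displaystyle{ ^*Y_1^0 }[/math]: outcome without treatment for a person who was actually treated (counterfactual);
  • [math]\displaystyle{ Y_0^0 }[/math]: outcome without treatment for a person who was actually not treated (observable).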

Only one of each pair of potential outcomes (PO) can be realized for a given assignment to condition; the other cannot. So when we try to estimate treatment effects, we need something observable to replace the fully contrary-to-fact quantities with (or a way to estimate them). When ignorability/exogeneity holds, as when people are randomized to treatment or control, we can ‘replace’ [math]\displaystyle{ ^*Y_0^1 }[/math] with its observable counterpart [math]\displaystyle{ Y_1^1 }[/math], and [math]\displaystyle{ ^*Y_1^0 }[/math] with its observable counterpart [math]\displaystyle{ Y_0^0 }[/math], not at the level of individual [math]\displaystyle{ Y_i }[/math]'s, but for averages such as [math]\displaystyle{ E[Y_i^1 - Y_i^0] }[/math], which is exactly the causal treatment effect (TE) one tries to recover.
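To spell out the replacement step (a standard consequence of the independence statement above, restated here for clarity): if [math]\displaystyle{ [Y_i^1, Y_i^0] \perp \!\!\!\perp Tx_i }[/math], then the average potential outcomes do not differ between the groups,

[math]\displaystyle{ E[Y^1 \mid Tx=1] = E[Y^1 \mid Tx=0] = E[Y^1], \qquad E[Y^0 \mid Tx=1] = E[Y^0 \mid Tx=0] = E[Y^0], }[/math]

so the observable group means [math]\displaystyle{ E[Y_1^1] }[/math] and [math]\displaystyle{ E[Y_0^0] }[/math] stand in for [math]\displaystyle{ E[Y^1] }[/math] and [math]\displaystyle{ E[Y^0] }[/math], and their difference estimates [math]\displaystyle{ E[Y_i^1 - Y_i^0] }[/math].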

Because of the ‘consistency rule’, the potential outcomes are the values actually realized, so we can write [math]\displaystyle{ Y_i^0 = Y_{i0}^0 }[/math] and [math]\displaystyle{ Y_i^1 = Y_{i1}^1 }[/math] (“the consistency rule states that an individual’s potential outcome under a hypothetical condition that happened to materialize is precisely the outcome experienced by that individual”,[5] p. 872). Hence [math]\displaystyle{ \text{TE} = E[Y_i^1 - Y_i^0] = E[Y_{i1}^1 - Y_{i0}^0] }[/math]. Now, by simply adding and subtracting the same fully counterfactual quantity [math]\displaystyle{ ^*Y_{i1}^0 }[/math], we get [math]\displaystyle{ E[Y_{i1}^1 - Y_{i0}^0] = E[Y_{i1}^1 - {}^*Y_{i1}^0 + {}^*Y_{i1}^0 - Y_{i0}^0] = E[Y_{i1}^1 - {}^*Y_{i1}^0] + E[{}^*Y_{i1}^0 - Y_{i0}^0] = \text{ATT} + \{\text{Selection Bias}\} }[/math], where ATT is the average treatment effect on the treated[6] and the second term is the bias introduced when people can choose whether to belong to the ‘treated’ or the ‘control’ group. Ignorability, either plain or conditional on some other variables, implies that such selection bias can be ignored, so one can recover (or estimate) the causal effect.
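A short simulation can illustrate this decomposition (hypothetical code; the data-generating process and numbers are invented for illustration). Here individuals self-select into treatment partly on the basis of their untreated potential outcome, and the naive contrast of group means splits exactly into ATT plus the selection-bias term; under randomization the second term would be zero.

```python
# Hypothetical self-selection example: people with higher untreated outcomes are
# more likely to opt into treatment, so the naive difference in group means
# equals ATT plus a nonzero selection-bias term.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

y0 = rng.normal(0.0, 1.0, n)                          # potential outcome without treatment
y1 = y0 + 2.0                                         # potential outcome with treatment (effect = 2)
tx = (y0 + rng.normal(0.0, 1.0, n) > 0).astype(int)   # self-selection: depends on y0
y = np.where(tx == 1, y1, y0)                         # realized outcome (consistency rule)

naive = y[tx == 1].mean() - y[tx == 0].mean()             # E[Y_1^1] - E[Y_0^0]
att = (y1 - y0)[tx == 1].mean()                           # E[Y_1^1 - *Y_1^0]  (ATT)
selection_bias = y0[tx == 1].mean() - y0[tx == 0].mean()  # E[*Y_1^0 - Y_0^0]

print(f"naive = {naive:.3f}")
print(f"ATT + selection bias = {att + selection_bias:.3f}")  # matches the naive contrast
```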

See also

  • Missing at random

References

  1. Rubin, Donald (1978). "Bayesian Inference for Causal Effects: The Role of Randomization". The Annals of Statistics 6 (1): 34–58. doi:10.1214/aos/1176344064. 
  2. Rosenbaum, Paul R.; Rubin, Donald B. (1983). "The Central Role of the Propensity Score in Observational Studies for Causal Effects". Biometrika 70 (1): 41–55. doi:10.2307/2335942. 
  3. Pearl, Judea (2000). Causality : models, reasoning, and inference. Cambridge, U.K.: Cambridge University Press. ISBN 978-0-521-89560-6. 
  4. Yamamoto, Teppei (2012). "Understanding the Past: Statistical Analysis of Causal Attribution". American Journal of Political Science 56 (1): 237–256. doi:10.1111/j.1540-5907.2011.00539.x. 
  5. Pearl, Judea (2010). "On the consistency rule in causal inference: axiom, definition, assumption, or theorem?". Epidemiology 21 (6): 872–875. doi:10.1097/EDE.0b013e3181f5d3fd. PMID 20864888. 
  6. Imai, Kosuke; King, Gary; Stuart, Elizabeth A. (2008). "Misunderstandings between Experimentalists and Observationalists about Causal Inference". Journal of the Royal Statistical Society, Series A (Statistics in Society) 171 (2): 481–502. doi:10.1111/j.1467-985X.2007.00527.x. http://nrs.harvard.edu/urn-3:HUL.InstRepos:4142695. 

Further reading

  • Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Rubin, Donald B. (2004). Bayesian Data Analysis. New York: Chapman & Hall/CRC. 
  • Jaeger, Manfred (2005). "Ignorability in Statistical and Probabilistic Inference". Journal of Artificial Intelligence Research 24: 889–917. doi:10.1613/jair.1657. Bibcode: 2011arXiv1109.2143J.