Set identification

In statistics and econometrics, set identification (or partial identification) extends the concept of identifiability (or "point identification") in statistical models to environments where the model and the distribution of observable variables are not sufficient to determine a unique value for the model parameters, but instead constrain the parameters to lie in a strict subset of the parameter space. Statistical models that are set (or partially) identified arise in a variety of settings in economics, including game theory and the Rubin causal model. Unlike approaches that deliver point-identification of the model parameters, methods from the literature on partial identification are used to obtain set estimates that are valid under weaker modelling assumptions.[1]

History

Early works containing the main ideas of set identification included (Frisch 1934) and (Marschak & Andrews). However, the methods were significantly developed and promoted by Charles Manski, beginning with (Manski 1989) and (Manski 1990).

Partial identification continues to be a major theme in research in econometrics. (Powell 2017) named partial identification as an example of theoretical progress in the econometrics literature, and (Bonhomme & Shaikh) list partial identification as “one of the most prominent recent themes in econometrics.”

Definition

Let [math]\displaystyle{ U \in \mathcal{U} \subseteq \mathbb{R}^{d_{u}} }[/math] denote a vector of latent variables, let [math]\displaystyle{ Z \in \mathcal{Z} \subseteq \mathbb{R}^{d_{z}} }[/math] denote a vector of observed (possibly endogenous) explanatory variables, and let [math]\displaystyle{ Y \in \mathcal{Y} \subseteq \mathbb{R}^{d_{y}} }[/math] denote a vector of observed endogenous outcome variables. A structure is a pair [math]\displaystyle{ s= (h,\mathcal{P}_{U\mid Z}) }[/math], where [math]\displaystyle{ \mathcal{P}_{U\mid Z} }[/math] represents a collection of conditional distributions, and [math]\displaystyle{ h }[/math] is a structural function such that [math]\displaystyle{ h(y,z,u) = 0 }[/math] for all realizations [math]\displaystyle{ (y,z,u) }[/math] of the random vectors [math]\displaystyle{ (Y,Z,U) }[/math]. A model is a collection of admissible (i.e. possible) structures [math]\displaystyle{ s }[/math].[2][3]

Let [math]\displaystyle{ \mathcal{P}_{Y\mid Z}(s) }[/math] denote the collection of conditional distributions of [math]\displaystyle{ Y \mid Z }[/math] consistent with the structure [math]\displaystyle{ s }[/math]. The admissible structures [math]\displaystyle{ s }[/math] and [math]\displaystyle{ s' }[/math] are said to be observationally equivalent if [math]\displaystyle{ \mathcal{P}_{Y\mid Z}(s) = \mathcal{P}_{Y\mid Z}(s') }[/math].[2][3] Let [math]\displaystyle{ s^\star }[/math] denote the true (i.e. data-generating) structure. The model is said to be point-identified if for every admissible [math]\displaystyle{ s \neq s^\star }[/math] we have [math]\displaystyle{ \mathcal{P}_{Y\mid Z}(s) \neq \mathcal{P}_{Y\mid Z}(s^\star) }[/math]. More generally, the model is said to be set (or partially) identified if there exists at least one admissible [math]\displaystyle{ s\neq s^\star }[/math] such that [math]\displaystyle{ \mathcal{P}_{Y\mid Z}(s) = \mathcal{P}_{Y\mid Z}(s^\star) }[/math]. The identified set of structures is the collection of admissible structures that are observationally equivalent to [math]\displaystyle{ s^\star }[/math].[4]

In most cases the definition can be substantially simplified. In particular, when [math]\displaystyle{ U }[/math] is independent of [math]\displaystyle{ Z }[/math] and has a known (up to some finite-dimensional parameter) distribution, and when [math]\displaystyle{ h }[/math] is known up to some finite-dimensional vector of parameters, each structure [math]\displaystyle{ s }[/math] can be characterized by a finite-dimensional parameter vector [math]\displaystyle{ \theta \in \Theta \subset \mathbb{R}^{d_{\theta}} }[/math]. If [math]\displaystyle{ \theta_0 }[/math] denotes the true (i.e. data-generating) vector of parameters, then the identified set, often denoted as [math]\displaystyle{ \Theta_{I} \subset \Theta }[/math], is the set of parameter values that are observationally equivalent to [math]\displaystyle{ \theta_0 }[/math].[4]
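
To make the parametric case concrete, consider a hypothetical model (not from the article) in which [math]\displaystyle{ Y = \theta_1 + \theta_2 + U }[/math] with [math]\displaystyle{ U \sim N(0,1) }[/math] independent of [math]\displaystyle{ Z }[/math]. The distribution of the observables then depends on [math]\displaystyle{ \theta = (\theta_1, \theta_2) }[/math] only through the sum [math]\displaystyle{ \theta_1 + \theta_2 }[/math], so any two parameter vectors with the same sum are observationally equivalent and the identified set is a line segment in [math]\displaystyle{ \Theta }[/math]. The following minimal sketch, with illustrative names and values throughout, enumerates that set on a grid.

<syntaxhighlight lang="python">
import numpy as np

def identified_set(theta_0, grid_points=201, tol=1e-9):
    """Collect the points of a grid on Theta = [0, 1]^2 that are observationally
    equivalent to theta_0, i.e. that imply the same distribution of Y."""
    grid = np.linspace(0.0, 1.0, grid_points)
    t1, t2 = np.meshgrid(grid, grid)
    # In this toy model the distribution of Y depends on theta only through
    # theta_1 + theta_2, so observational equivalence reduces to equal sums.
    equivalent = np.abs((t1 + t2) - sum(theta_0)) < tol
    return np.column_stack([t1[equivalent], t2[equivalent]])

Theta_I = identified_set(theta_0=(0.25, 0.50))
print(len(Theta_I))  # many distinct parameter vectors, all observationally equivalent
</syntaxhighlight>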

Example: missing data

This example is due to (Tamer 2010). Suppose there are two binary random variables, Y and Z. The econometrician is interested in [math]\displaystyle{ \mathrm P(Y = 1) }[/math]. There is a missing data problem, however: Y can only be observed if [math]\displaystyle{ Z = 1 }[/math].

By the law of total probability,

[math]\displaystyle{ \mathrm P(Y = 1) = \mathrm P(Y = 1 \mid Z = 1) \mathrm P(Z = 1) + \mathrm P(Y = 1 \mid Z = 0) \mathrm P(Z = 0). }[/math]

The only unknown object is [math]\displaystyle{ \mathrm P(Y = 1 \mid Z = 0) }[/math], which is constrained to lie between 0 and 1. Therefore, the identified set is

[math]\displaystyle{ \Theta_I = \{ p \in [0, 1] : p = \mathrm P(Y = 1 \mid Z = 1) \mathrm P(Z = 1) + q \mathrm P(Z = 0), \text{ for some } q \in [0,1]\}. }[/math]

Since [math]\displaystyle{ q }[/math] ranges over [math]\displaystyle{ [0,1] }[/math], this set is simply the interval [math]\displaystyle{ [\,\mathrm P(Y = 1 \mid Z = 1) \mathrm P(Z = 1),\ \mathrm P(Y = 1 \mid Z = 1) \mathrm P(Z = 1) + \mathrm P(Z = 0)\,] }[/math]. Given the missing data constraint, the econometrician can only say that [math]\displaystyle{ \mathrm P(Y = 1) \in \Theta_I }[/math]; this conclusion makes use of all available information.
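
The identified set can be estimated by replacing the probabilities above with sample frequencies. The sketch below does this on simulated data; the variable names, sample size and probabilities are illustrative and not part of the original example.

<syntaxhighlight lang="python">
import numpy as np

# Simulated data; the sample size and probabilities are illustrative only.
rng = np.random.default_rng(0)
n = 10_000
z = rng.binomial(1, 0.7, size=n)           # Z = 1 means that Y is observed
y = rng.binomial(1, 0.4, size=n)           # the outcome (latent whenever Z = 0)

p_z1 = z.mean()                            # estimate of P(Z = 1)
p_y1_given_z1 = y[z == 1].mean()           # estimate of P(Y = 1 | Z = 1)

lower = p_y1_given_z1 * p_z1               # q = 0: every unobserved Y equals 0
upper = p_y1_given_z1 * p_z1 + (1 - p_z1)  # q = 1: every unobserved Y equals 1

print(f"estimated identified set for P(Y = 1): [{lower:.3f}, {upper:.3f}]")
</syntaxhighlight>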

Statistical inference

Set estimation cannot rely on the usual tools for statistical inference developed for point estimation. A literature in statistics and econometrics studies methods for statistical inference in the context of set-identified models, focusing on constructing confidence intervals or confidence regions with appropriate properties. For example, a method developed by (Chernozhukov & Hong) constructs confidence regions that cover the identified set with a given probability.
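
As a rough illustration of the problem (and not a description of the method just cited), the sketch below revisits the missing data example: it estimates the two endpoints of the identified set and widens them by bootstrap standard errors, giving a simple, conservative confidence region for the identified set. All names and values are illustrative.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def bounds(y, z):
    """Plug-in estimates of the lower and upper bounds for P(Y = 1)
    when Y is observed only if Z = 1."""
    p_z1 = z.mean()
    lower = y[z == 1].mean() * p_z1
    return lower, lower + (1 - p_z1)

def confidence_region(y, z, level=0.95, n_boot=500, seed=1):
    """Widen the estimated bounds by bootstrap standard errors.  The resulting
    interval covers the whole identified set with at least the nominal
    probability (a simple, conservative construction)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = rng.integers(0, n, size=(n_boot, n))            # bootstrap indices
    boot = np.array([bounds(y[idx], z[idx]) for idx in draws])
    se_lower, se_upper = boot.std(axis=0)
    c = norm.ppf(1 - (1 - level) / 2)
    lower, upper = bounds(y, z)
    return lower - c * se_lower, upper + c * se_upper

# Example use with simulated data (illustrative values only).
rng = np.random.default_rng(0)
z = rng.binomial(1, 0.7, size=5_000)                        # Z = 1: Y observed
y = rng.binomial(1, 0.4, size=5_000)                        # latent outcome
print(confidence_region(y, z))
</syntaxhighlight>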
