Dynamic discrete choice


Dynamic discrete choice (DDC) models, also known as discrete choice models of dynamic programming, model an agent's choices over discrete options that have future implications. Rather than assuming observed choices are the result of static utility maximization, observed choices in DDC models are assumed to result from an agent's maximization of the present value of utility, generalizing the utility theory upon which discrete choice models are based.[1] The goal of DDC methods is to estimate the structural parameters of the agent's decision process. Once these parameters are known, the researcher can then use the estimates to simulate how the agent would behave in a counterfactual state of the world. (For example, how a prospective college student's enrollment decision would change in response to a tuition increase.)

Mathematical representation

Agent [math]\displaystyle{ n }[/math]'s maximization problem can be written mathematically as follows:

[math]\displaystyle{ V\left(x_{n0}\right)=\max_{\left\{d_{nt}\right\}_{t=0}^T} \mathbb{E} \left(\sum_{t=0}^T \sum_{i=1}^J \beta^{t} \, \mathbf{1}\left(d_{nt}=i\right)U_{nit} \left(x_{nt}, \varepsilon_{nit}\right)\right), }[/math]

where

  • [math]\displaystyle{ x_{nt} }[/math] are state variables, with [math]\displaystyle{ x_{n0} }[/math] the agent's initial condition
  • [math]\displaystyle{ d_{nt} }[/math] represents [math]\displaystyle{ n }[/math]'s decision from among [math]\displaystyle{ J }[/math] discrete alternatives
  • [math]\displaystyle{ \beta \in \left(0,1\right) }[/math] is the discount factor
  • [math]\displaystyle{ U_{nit} }[/math] is the flow utility [math]\displaystyle{ n }[/math] receives from choosing alternative [math]\displaystyle{ i }[/math] in period [math]\displaystyle{ t }[/math], and depends on both the state [math]\displaystyle{ x_{nt} }[/math] and unobserved factors [math]\displaystyle{ \varepsilon_{nit} }[/math]
  • [math]\displaystyle{ T }[/math] is the time horizon
  • The expectation [math]\displaystyle{ \mathbb{E}\left(\cdot\right) }[/math] is taken over both the [math]\displaystyle{ x_{nt} }[/math]'s and [math]\displaystyle{ \varepsilon_{nit} }[/math]'s in [math]\displaystyle{ U_{nit} }[/math]. That is, the agent is uncertain about future transitions in the states, and is also uncertain about future realizations of unobserved factors.

Simplifying assumptions and notation

It is standard to impose the following simplifying assumptions on, and notation for, the dynamic decision problem:

1. Flow utility is additively separable and linear in parameters

The flow utility can be written as an additive sum, consisting of deterministic and stochastic elements. The deterministic component can be written as a linear function of the structural parameters.

[math]\displaystyle{ \begin{alignat}{5} U_{nit}\left(x_{nt},\varepsilon_{nit}\right) &&\; = \;&& u_{nit} &&\; + \;&& \varepsilon_{nit} \\ &&\; = \;&& X_{nt}\alpha_{i} &&\; + \;&& \varepsilon_{nit} \end{alignat} }[/math]
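As a concrete illustration, the deterministic component is just a matrix-vector product of alternative-specific coefficient vectors with the observed state, and the stochastic component is an additive draw. The sketch below uses made-up dimensions and names (nothing here comes from a particular application), and draws the stochastic component from the Type I extreme value distribution in anticipation of the assumption introduced below:

```python
import numpy as np

rng = np.random.default_rng(0)

J, K = 3, 4                         # number of alternatives and of observed state characteristics (illustrative)
X_nt = rng.normal(size=K)           # observed state vector x_{nt} for one agent in one period
alpha = rng.normal(size=(J, K))     # structural parameters: one coefficient vector alpha_i per alternative

u_nt = alpha @ X_nt                 # deterministic components u_{nit} = X_{nt} alpha_i
eps_nt = rng.gumbel(size=J)         # stochastic components epsilon_{nit} (Type I extreme value draws)
U_nt = u_nt + eps_nt                # additively separable flow utilities U_{nit}
```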
2. The optimization problem can be written as a Bellman equation

Define by [math]\displaystyle{ V_{nt}(x_{nt}) }[/math] the ex ante value function for individual [math]\displaystyle{ n }[/math] in period [math]\displaystyle{ t }[/math] just before [math]\displaystyle{ \varepsilon_{nt} }[/math] is revealed:

[math]\displaystyle{ V_{nt}(x_{nt}) = \mathbb{E} \max_i \left\{ u_{nit}(x_{nt}) + \varepsilon_{nit} + \beta \int_{x_{t+1}} V_{nt+1} (x_{nt+1}) \, dF\left(x_{t+1} \mid x_t \right) \right\} }[/math]

where the expectation operator [math]\displaystyle{ \mathbb{E} }[/math] is over the [math]\displaystyle{ \varepsilon }[/math]'s, and where [math]\displaystyle{ dF\left(x_{t+1} \mid x_t \right) }[/math] represents the probability distribution over [math]\displaystyle{ x_{t+1} }[/math] conditional on [math]\displaystyle{ x_{t} }[/math]. The expectation over state transitions is accomplished by taking the integral over this probability distribution.

It is possible to decompose [math]\displaystyle{ V_{nt}(x_{nt}) }[/math] into deterministic and stochastic components:

[math]\displaystyle{ V_{nt}(x_{nt}) = \mathbb{E} \max_i \left\{ v_{nit}(x_{nt}) + \varepsilon_{nit} \right\} }[/math]

where [math]\displaystyle{ v_{nit} }[/math] is the value of choosing alternative [math]\displaystyle{ i }[/math] at time [math]\displaystyle{ t }[/math] and is written as

[math]\displaystyle{ v_{nit}(x_{nt}) = u_{nit}\left(x_{nt}\right) + \beta \int_{x_{t+1}} \mathbb{E} \max_{j} \left\{ v_{njt+1}(x_{nt+1}) + \varepsilon_{njt+1} \right\} \, dF(x_{t+1} \mid x_t) }[/math]

where now the expectation [math]\displaystyle{ \mathbb{E} }[/math] is taken over the [math]\displaystyle{ \varepsilon_{njt+1} }[/math].

3. The optimization problem follows a Markov decision process

The states [math]\displaystyle{ x_{t} }[/math] follow a Markov chain. That is, attainment of state [math]\displaystyle{ x_{t} }[/math] depends only on the state [math]\displaystyle{ x_{t-1} }[/math] and not [math]\displaystyle{ x_{t-2} }[/math] or any prior state.

Conditional value functions and choice probabilities

The value function [math]\displaystyle{ v_{nit} }[/math] defined in the previous section is called the conditional value function, because it is the value function conditional on choosing alternative [math]\displaystyle{ i }[/math] in period [math]\displaystyle{ t }[/math]. Writing the conditional value function in this way is useful in constructing formulas for the choice probabilities.

To write down the choice probabilities, the researcher must make an assumption about the distribution of the [math]\displaystyle{ \varepsilon_{nit} }[/math]'s. As in static discrete choice models, this distribution can be assumed to be iid Type I extreme value, generalized extreme value, multivariate normal (as in multinomial probit), or mixed logit.

For the case where the [math]\displaystyle{ \varepsilon_{nit} }[/math] are drawn iid from the Type I extreme value distribution (the multinomial logit case), the choice probabilities take the closed form:

[math]\displaystyle{ P_{nit} = \frac{\exp(v_{nit})}{\sum_{j=1}^J \exp(v_{njt})} }[/math]
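Under this assumption, the expected maximum in the Bellman equation of assumption 2 has the familiar log-sum-exp closed form, so on a discretized state space the conditional value functions can be computed by backward induction and the choice probabilities evaluated directly. The following is a minimal sketch on a made-up finite-horizon problem; the grid sizes, flow utilities, and transition matrix are all illustrative assumptions rather than part of any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

J, S, T, beta = 3, 5, 10, 0.95           # alternatives, discretized states, horizon, discount factor (illustrative)
u = rng.normal(size=(T, J, S))           # flow utilities u_{nit}(x) on the state grid
F = rng.dirichlet(np.ones(S), size=S)    # F[x, x'] = Pr(x_{t+1} = x' | x_t = x)

def logsumexp(a, axis=0):
    """Numerically stable log-sum-exp; equals E max_i {a_i + eps_i} up to Euler's constant."""
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

v = np.zeros((T, J, S))                  # conditional value functions v_{nit}(x)
P = np.zeros((T, J, S))                  # choice probabilities P_{nit}(x)
for t in reversed(range(T)):
    if t < T - 1:
        emax_next = logsumexp(v[t + 1], axis=0)      # ex ante value of arriving in each state at t+1
        v[t] = u[t] + beta * (F @ emax_next)         # v_{it}(x) = u_{it}(x) + beta * E[Emax | x]
    else:
        v[t] = u[t]                                  # no continuation value in the final period
    P[t] = np.exp(v[t] - logsumexp(v[t], axis=0))    # multinomial logit choice probabilities
```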

Estimation

Estimation of dynamic discrete choice models is particularly challenging because the researcher must solve the backwards recursion problem for each guess of the structural parameters.

The most common methods used to estimate the structural parameters are maximum likelihood estimation and method of simulated moments.

Aside from the choice of estimation method, the researcher must also choose how to handle the dynamic programming problem itself; depending on the complexity of the problem, different solution approaches can be employed. These can be divided into full-solution methods and non-solution methods.

Full-solution methods

The foremost example of a full-solution method is the nested fixed point (NFXP) algorithm developed by John Rust in 1987.[2] The NFXP algorithm is described in great detail in its documentation manual.[3]

Su and Judd (2012)[4] implement another approach (dismissed as intractable by Rust in 1987), which uses constrained optimization of the likelihood function, a special case of mathematical programming with equilibrium constraints (MPEC). Specifically, the likelihood function is maximized subject to the constraints imposed by the model, and expressed in terms of the additional variables that describe the model's structure. This approach requires powerful optimization software such as Artelys Knitro because of the high dimensionality of the optimization problem. Once it is solved, both the structural parameters that maximize the likelihood and the solution of the model are recovered.

In a later article,[5] Rust and coauthors show that the speed advantage of MPEC over NFXP is not significant. Yet, because the computations required by MPEC do not rely on the structure of the model, its implementation is much less labor-intensive.

Despite numerous contenders, the NFXP maximum likelihood estimator remains the leading estimation method for Markov decision models.[5]

Non-solution methods

Non-solution methods offer an alternative to full-solution methods. In this case, the researcher can estimate the structural parameters without having to fully solve the backwards recursion problem for each parameter guess. Non-solution methods are typically faster but require additional assumptions, which are in many cases realistic.

The leading non-solution method is conditional choice probabilities, developed by V. Joseph Hotz and Robert A. Miller.[6]

Examples

Bus engine replacement model

The bus engine replacement model developed in the seminal paper (Rust 1987) is one of the first dynamic stochastic models of discrete choice estimated on real data, and it continues to serve as a classic example of problems of this type.[4]

The model is a simple regenerative optimal stopping problem in a stochastic dynamic setting, faced by the decision maker, Harold Zurcher, superintendent of maintenance at the Madison Metropolitan Bus Company in Madison, Wisconsin. For every bus in operation, in each time period Harold Zurcher has to decide whether to replace the engine and bear the associated replacement cost, or to continue operating the bus at an ever-rising cost of operation, which includes insurance and the cost of lost ridership in the case of a breakdown.

Let [math]\displaystyle{ x_t }[/math] denote the odometer reading (mileage) at period [math]\displaystyle{ t }[/math], [math]\displaystyle{ c(x_t,\theta) }[/math] the cost of operating the bus, which depends on the vector of parameters [math]\displaystyle{ \theta }[/math], [math]\displaystyle{ RC }[/math] the cost of replacing the engine, and [math]\displaystyle{ \beta }[/math] the discount factor. Then the per-period utility is given by

[math]\displaystyle{ U(x_t,\xi_t,d,\theta)= \begin{cases} -c(x_t,\theta) + \xi_{t,\text{keep}}, & \text{if }\;\; d=\text{keep}, \\ -RC-c(0,\theta) + \xi_{t,\text{replace}}, & \text{if }\;\; d=\text{replace}, \end{cases} = u(x_t,d,\theta) + \begin{cases} \xi_{t,\text{keep}}, & \text{if }\;\; d=\text{keep}, \\ \xi_{t,\text{replace}}, & \text{if }\;\; d=\text{replace}, \end{cases} }[/math]

where [math]\displaystyle{ d }[/math] denotes the decision (keep or replace), and [math]\displaystyle{ \xi_{t,\text{keep}} }[/math] and [math]\displaystyle{ \xi_{t,\text{replace}} }[/math] represent the components of the utility observed by Harold Zurcher, but not by John Rust (the econometrician). It is assumed that [math]\displaystyle{ \xi_{t,\text{keep}} }[/math] and [math]\displaystyle{ \xi_{t,\text{replace}} }[/math] are independent and identically distributed with the Type I extreme value distribution, and that [math]\displaystyle{ \xi_{t,\bullet} }[/math] are independent of [math]\displaystyle{ \xi_{t-1,\bullet} }[/math] conditional on [math]\displaystyle{ x_t }[/math].

Then the optimal decisions satisfy the Bellman equation

[math]\displaystyle{ V(x,\xi,\theta) = \max_{d=\text{keep},\text{replace}} \left\{ u(x,d,\theta)+\xi_d + \beta \iint V(x',\xi',\theta) q(d\xi'\mid x',\theta) p(dx'\mid x,d,\theta) \right\} }[/math]

where [math]\displaystyle{ p(dx'\mid x,d,\theta) }[/math] and [math]\displaystyle{ q(d\xi'\mid x',\theta) }[/math] are respectively the transition densities for the observed and unobserved state variables. Time indices in the Bellman equation are dropped because the model is formulated in an infinite-horizon setting; the unknown optimal policy is therefore stationary, i.e. independent of time.

Given the distributional assumption on [math]\displaystyle{ q(d\xi'\mid x',\theta) }[/math], the probability of a particular choice [math]\displaystyle{ d }[/math] is given by

[math]\displaystyle{ P(d\mid x,\theta) = \frac{ \exp\{ u(x,d,\theta)+\beta EV(x,d,\theta)\}}{\sum_{d' \in D(x)} \exp\{ u(x,d',\theta)+\beta EV(x,d',\theta)\} } }[/math]

where [math]\displaystyle{ EV(x,d,\theta) }[/math] is a unique solution to the functional equation

[math]\displaystyle{ EV(x,d,\theta)= \int \left[ \log\left( \sum_{d'=\text{keep},\text{replace}} \exp\{u(x',d',\theta)+\beta EV(x',d',\theta)\}\right) \right] p(dx'\mid x,d,\theta). }[/math]

It can be shown that the latter functional equation defines a contraction mapping if the state space [math]\displaystyle{ x_t }[/math] is bounded, so there will be a unique solution [math]\displaystyle{ EV(x,d,\theta) }[/math] for any [math]\displaystyle{ \theta }[/math], and further the implicit function theorem holds, so [math]\displaystyle{ EV(x,d,\theta) }[/math] is also a smooth function of [math]\displaystyle{ \theta }[/math] for each [math]\displaystyle{ (x,d) }[/math].
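Because the mapping is a contraction, the fixed point can be computed by successive approximations on a discretized mileage grid. The sketch below is a minimal illustration, not Rust's empirical specification: it assumes a linear operating cost [math]\displaystyle{ c(x,\theta)=\theta_1 x }[/math], a simple three-point distribution of per-period mileage increments, and a fixed discount factor, and it exploits the regenerative structure (the post-replacement transition is the transition from mileage zero, so [math]\displaystyle{ EV(x,\text{replace},\theta) }[/math] is a constant):

```python
import numpy as np

# Illustrative primitives (assumptions for this sketch, not Rust's empirical specification)
N = 90                                    # discretized mileage grid x = 0, 1, ..., N-1
beta = 0.95                               # discount factor (held fixed, not estimated)
inc_probs = np.array([0.35, 0.45, 0.20])  # assumed distribution of per-period mileage increments

P_keep = np.zeros((N, N))                 # P_keep[x, x'] = Pr(next mileage x' | current mileage x, keep)
for x in range(N):
    for j, p in enumerate(inc_probs):
        P_keep[x, min(x + j, N - 1)] += p # pile overflow mass at the top of the grid

def flow_utility(theta):
    """u(x, d, theta) with an assumed linear operating cost c(x, theta) = theta1 * x."""
    theta1, RC = theta
    grid = np.arange(N)
    u_keep = -theta1 * grid               # keep:    -c(x, theta)
    u_replace = np.full(N, -RC)           # replace: -RC - c(0, theta), and c(0) = 0 for this cost function
    return u_keep, u_replace

def solve_ev(theta, tol=1e-10, max_iter=10_000):
    """Successive approximations on the EV fixed-point (contraction) equation."""
    u_keep, u_replace = flow_utility(theta)
    ev_keep, ev_replace = np.zeros(N), 0.0    # EV(x, replace) is a constant: replacement regenerates the process
    for _ in range(max_iter):
        v_keep = u_keep + beta * ev_keep                            # choice-specific values at the future state
        v_replace = u_replace + beta * ev_replace
        m = np.maximum(v_keep, v_replace)
        W = m + np.log(np.exp(v_keep - m) + np.exp(v_replace - m))  # log-sum over d'
        new_keep, new_replace = P_keep @ W, P_keep[0] @ W           # expectations over x' after keep / after replace
        if max(np.max(np.abs(new_keep - ev_keep)), abs(new_replace - ev_replace)) < tol:
            return new_keep, new_replace
        ev_keep, ev_replace = new_keep, new_replace
    return ev_keep, ev_replace

def choice_prob_replace(theta):
    """P(replace | x, theta) implied by the solved fixed point."""
    u_keep, u_replace = flow_utility(theta)
    ev_keep, ev_replace = solve_ev(theta)
    v_keep = u_keep + beta * ev_keep
    v_replace = u_replace + beta * ev_replace
    return 1.0 / (1.0 + np.exp(v_keep - v_replace))   # binary logit formula

p_replace = choice_prob_replace(theta=(0.05, 10.0))   # replacement probability at each mileage state
```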

Estimation with nested fixed point algorithm

The contraction mapping above can be solved numerically for the fixed point [math]\displaystyle{ EV(x,d,\theta) }[/math] that yields choice probabilities [math]\displaystyle{ P(d\mid x,\theta) }[/math] for any given value of [math]\displaystyle{ \theta }[/math]. The log-likelihood function can then be formulated as

[math]\displaystyle{ L(\theta) = \sum_{i=1}^N \sum_{t=1}^{T_i} \log(P(d_{it}\mid x_{it},\theta))+\log(p(x_{it}\mid x_{it-1},d_{it-1},\theta)), }[/math]

where [math]\displaystyle{ x_{i,t} }[/math] and [math]\displaystyle{ d_{i,t} }[/math] represent data on state variables (odometer readings) and decision (keep or replace) for [math]\displaystyle{ i=1,\dots,N }[/math] individual buses, each in [math]\displaystyle{ t=1,\dots,T_i }[/math] periods.
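A minimal sketch of this formula: given the model choice probabilities evaluated at each observed choice and the transition densities evaluated at each observed mileage increment (both for a candidate [math]\displaystyle{ \theta }[/math]), the log-likelihood is simply a sum of logarithms:

```python
import numpy as np

def log_likelihood(choice_probs, transition_probs):
    """Panel log-likelihood L(theta) from pre-evaluated model probabilities.

    choice_probs:     P(d_it | x_it, theta) evaluated at every observed bus-period choice
    transition_probs: p(x_it | x_{i,t-1}, d_{i,t-1}, theta) evaluated at every observed transition
    """
    return np.sum(np.log(choice_probs)) + np.sum(np.log(transition_probs))
```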

The joint algorithm for solving the fixed point problem given a particular value of the parameter [math]\displaystyle{ \theta }[/math] and maximizing the log-likelihood [math]\displaystyle{ L(\theta) }[/math] with respect to [math]\displaystyle{ \theta }[/math] was named the nested fixed point algorithm (NFXP) by John Rust.

Rust's implementation of the nested fixed point algorithm is highly optimized for this problem, using Newton–Kantorovich iterations to calculate [math]\displaystyle{ P(d\mid x,\theta) }[/math] and quasi-Newton methods, such as the Berndt–Hall–Hall–Hausman algorithm, for likelihood maximization.[5]
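In outline, the nesting looks as follows. This sketch reuses `flow_utility`, `solve_ev`, `choice_prob_replace`, and `beta` from the fixed-point sketch above, substitutes a generic derivative-free optimizer from SciPy for Rust's Newton–Kantorovich/BHHH implementation, and works with hypothetical data arrays `x_data` and `d_data` (observed mileage states and 0/1 replacement decisions); the mileage-transition part of the likelihood is omitted because it can be estimated separately from the observed increments:

```python
import numpy as np
from scipy.optimize import minimize

def neg_partial_loglik(theta, x_data, d_data):
    """Negative log-likelihood of the observed keep/replace choices at theta = (theta1, RC)."""
    p_replace = choice_prob_replace(theta)      # inner loop: solve the fixed point at this trial theta
    p_obs = np.where(d_data == 1, p_replace[x_data], 1.0 - p_replace[x_data])
    return -np.sum(np.log(np.clip(p_obs, 1e-12, 1.0)))

# Hypothetical data: observed mileage states and 0/1 replacement decisions.
x_data = np.array([5, 20, 40, 70, 85])
d_data = np.array([0, 0, 0, 1, 1])

# Outer loop: search over theta, re-solving the dynamic program at every trial value.
result = minimize(neg_partial_loglik, x0=np.array([0.05, 10.0]),
                  args=(x_data, d_data), method="Nelder-Mead")
theta_hat = result.x
```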

Estimation with MPEC

In the nested fixed point algorithm, [math]\displaystyle{ P(d\mid x,\theta) }[/math] is recalculated for each guess of the parameters θ. The MPEC method instead solves the constrained optimization problem:[4]

[math]\displaystyle{ \begin{align} \max & \qquad L(\theta) & \\ \text{subject to} & \qquad EV(x,d,\theta)= \int \left[ \log\left( \sum_{d'=\text{keep},\text{replace}} \exp\{ u(x',d',\theta) + \beta EV(x',d',\theta)\}\right) \right] p(dx'\mid x,d,\theta) \end{align} }[/math]

This method is faster to compute than non-optimized implementations of the nested fixed point algorithm, and takes about as long as highly optimized implementations.[5]

Estimation with non-solution methods

The conditional choice probabilities method of Hotz and Miller can be applied in this setting. Hotz, Miller, Sanders, and Smith proposed a computationally simpler version of the method, and tested it on a study of the bus engine replacement problem. The method works by estimating conditional choice probabilities using simulation, then backing out the implied differences in value functions.[7][8]
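The key step is the Hotz–Miller inversion: under the Type I extreme value assumption, differences in conditional value functions equal differences in the logs of the conditional choice probabilities, so value differences can be recovered from CCPs estimated directly from the data without solving the dynamic program. A minimal sketch on hypothetical keep/replace panel data, using simple frequency estimates of the CCPs:

```python
import numpy as np

# Hypothetical panel: mileage bin and decision (0 = keep, 1 = replace) for each bus-period
x_obs = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3])
d_obs = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1])
n_states = 4

# Step 1: nonparametric (frequency) estimates of the conditional choice probabilities P(replace | x)
ccp_replace = np.array([d_obs[x_obs == s].mean() for s in range(n_states)])
ccp_replace = np.clip(ccp_replace, 1e-6, 1 - 1e-6)   # keep the logarithms finite

# Step 2: Hotz-Miller inversion under Type I extreme value errors:
# v(x, replace) - v(x, keep) = log P(replace | x) - log P(keep | x)
v_diff = np.log(ccp_replace) - np.log(1.0 - ccp_replace)
```

The structural parameters are then recovered by matching the model's implications to these value differences, for instance by forward-simulating continuation values from the estimated CCPs as in Hotz, Miller, Sanders, and Smith.[8]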

See also

  • Inverse reinforcement learning

References

  1. Keane & Wolpin 2009.
  2. Rust 1987.
  3. Rust, John (2008). "Nested fixed point algorithm documentation manual". Unpublished. https://editorialexpress.com/jrust/nfxp.html. 
  4. Su, Che-Lin; Judd, Kenneth L. (2012). "Constrained Optimization Approaches to Estimation of Structural Models". Econometrica 80 (5): 2213–2230. doi:10.3982/ECTA7925. ISSN 1468-0262. 
  5. Iskhakov, Fedor; Lee, Jinhyuk; Rust, John; Schjerning, Bertel; Seo, Kyoungwon (2016). "Comment on "Constrained Optimization Approaches to Estimation of Structural Models"". Econometrica 84 (1): 365–370. doi:10.3982/ECTA12605. ISSN 0012-9682. https://curis.ku.dk/portal/da/publications/constrained-optimization-approaches-to-estimation-of-structural-models(99a64534-6f7f-44e8-b680-d392e1f90027).html. 
  6. Hotz, V. Joseph; Miller, Robert A. (1993). "Conditional Choice Probabilities and the Estimation of Dynamic Models". Review of Economic Studies 60 (3): 497–529. doi:10.2307/2298122. 
  7. Aguirregabiria & Mira 2010.
  8. Hotz, V. J.; Miller, R. A.; Sanders, S.; Smith, J. (1994-04-01). "A Simulation Estimator for Dynamic Models of Discrete Choice". The Review of Economic Studies (Oxford University Press (OUP)) 61 (2): 265–289. doi:10.2307/2297981. ISSN 0034-6527. 

Further reading