Optimal stopping
In mathematics, the theory of optimal stopping[1][2] or early stopping[3] is concerned with the problem of choosing a time to take a particular action, in order to maximise an expected reward or minimise an expected cost. Optimal stopping problems can be found in areas of statistics, economics, and mathematical finance (related to the pricing of American options). A key example of an optimal stopping problem is the secretary problem. Optimal stopping problems can often be written in the form of a Bellman equation, and are therefore often solved using dynamic programming.
Definition
Discrete time case
Stopping rule problems are associated with two objects:
- A sequence of random variables [math]\displaystyle{ X_1, X_2, \ldots }[/math], whose joint distribution is assumed to be known
- A sequence of 'reward' functions [math]\displaystyle{ (y_i)_{i\ge 1} }[/math] which depend on the observed values of the random variables in the first sequence:
- [math]\displaystyle{ y_i=y_i (x_1, \ldots ,x_i) }[/math]
Given those objects, the problem is as follows:
- You are observing the sequence of random variables, and at each step [math]\displaystyle{ i }[/math], you can choose to either stop observing or continue
- If you stop observing at step [math]\displaystyle{ i }[/math], you will receive reward [math]\displaystyle{ y_i }[/math]
- You want to choose a stopping rule to maximize your expected reward (or equivalently, minimize your expected loss)
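To make the definition concrete, the following sketch estimates the expected reward of one candidate stopping rule by simulation. It is only an illustration: the Uniform(0, 1) observations, the per-step cost of 0.1, the horizon of 20, and the threshold rule are all assumptions chosen for the example, not part of the general definition.

```python
import random

def expected_reward(threshold, cost=0.1, horizon=20, trials=100_000):
    """Monte Carlo estimate of the expected reward of a threshold stopping rule.

    Illustrative assumptions (not part of the general definition):
      X_1, X_2, ... are i.i.d. Uniform(0, 1),
      y_i = x_i - cost * i  is the reward for stopping at step i,
      rule: stop at the first i with x_i >= threshold,
            or at the horizon if that never happens.
    """
    total = 0.0
    for _ in range(trials):
        for i in range(1, horizon + 1):
            x = random.random()
            if x >= threshold or i == horizon:
                total += x - cost * i
                break
    return total / trials

# A good threshold balances waiting for a high observation
# against the cost accumulated while waiting.
for t in (0.5, 0.7, 0.9):
    print(t, round(expected_reward(t), 3))
```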
Continuous time case
Consider a gain process [math]\displaystyle{ G=(G_t)_{t\ge 0} }[/math] defined on a filtered probability space [math]\displaystyle{ (\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge 0},\mathbb{P}) }[/math] and assume that [math]\displaystyle{ G }[/math] is adapted to the filtration. The optimal stopping problem is to find the stopping time [math]\displaystyle{ \tau^* }[/math] which maximizes the expected gain
- [math]\displaystyle{ V_t^T = \mathbb{E} G_{\tau^*} = \sup_{t\le \tau \le T} \mathbb{E} G_\tau }[/math]
where [math]\displaystyle{ V_t^T }[/math] is called the value function. Here [math]\displaystyle{ T }[/math] can take the value [math]\displaystyle{ \infty }[/math].
A more specific formulation is as follows. We consider an adapted strong Markov process [math]\displaystyle{ X = (X_t)_{t\ge 0} }[/math] defined on a filtered probability space [math]\displaystyle{ (\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge 0},\mathbb{P}_x) }[/math] where [math]\displaystyle{ \mathbb{P}_x }[/math] denotes the probability measure where the stochastic process starts at [math]\displaystyle{ x }[/math]. Given continuous functions [math]\displaystyle{ M,L }[/math], and [math]\displaystyle{ K }[/math], the optimal stopping problem is
- [math]\displaystyle{ V(x) = \sup_{0\le \tau \le T} \mathbb{E}_x \left( M(X_\tau) + \int_0^\tau L(X_t) dt + \sup_{0\le t\le\tau} K(X_t) \right). }[/math]
This is sometimes called the MLS formulation (the letters standing for Mayer, Lagrange, and supremum, respectively).[4]
Solution methods
There are generally two approaches to solving optimal stopping problems.[4] When the underlying process (or the gain process) is described by its unconditional finite-dimensional distributions, the appropriate solution technique is the martingale approach, so called because it uses martingale theory, the most important concept being the Snell envelope. In the discrete time case, if the planning horizon [math]\displaystyle{ T }[/math] is finite, the problem can also be easily solved by dynamic programming.
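As a minimal illustration of the finite-horizon dynamic-programming approach, the sketch below assumes the rewards are the observations themselves and that they are i.i.d. Uniform(0, 1); both assumptions are made only so that the conditional expectation in the Snell envelope recursion has a simple closed form.

```python
def snell_values_uniform(horizon):
    """Optimal value of stopping an i.i.d. Uniform(0, 1) sequence
    X_1, ..., X_T with reward y_i = X_i and horizon T = `horizon`.

    With i.i.d. rewards the Snell envelope recursion
        U_n = max(X_n, E[U_{n+1} | F_n])
    reduces to the scalar recursion v_n = E[max(X, v_{n+1})], which for
    Uniform(0, 1) equals (1 + v_{n+1}**2) / 2.  The optimal rule is to
    stop at the first n with X_n >= v_{n+1}.
    """
    v = [0.0] * (horizon + 1)
    v[horizon] = 0.5                      # last step: accept E[X]
    for n in range(horizon - 1, 0, -1):
        v[n] = (1 + v[n + 1] ** 2) / 2
    return v

# v[1] is the value of the whole problem; the thresholds are v[n+1].
print(snell_values_uniform(5)[1:])        # ~ [0.775, 0.742, 0.695, 0.625, 0.5]
```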
When the underlying process is determined by a family of (conditional) transition functions leading to a Markov family of transition probabilities, powerful analytical tools provided by the theory of Markov processes can often be utilized and this approach is referred to as the Markov method. The solution is usually obtained by solving the associated free-boundary problems (Stefan problems).
A jump diffusion result
Let [math]\displaystyle{ Y_t }[/math] be a Lévy diffusion in [math]\displaystyle{ \mathbb{R}^k }[/math] given by the SDE
- [math]\displaystyle{ dY_t = b(Y_t) dt + \sigma (Y_t) dB_t + \int_{\mathbb{R}^k} \gamma (Y_{t-},z)\bar{N}(dt,dz),\quad Y_0 = y }[/math]
where [math]\displaystyle{ B }[/math] is an [math]\displaystyle{ m }[/math]-dimensional Brownian motion, [math]\displaystyle{ \bar{N} }[/math] is an [math]\displaystyle{ l }[/math]-dimensional compensated Poisson random measure, [math]\displaystyle{ b:\mathbb{R}^k \to \mathbb{R}^k }[/math], [math]\displaystyle{ \sigma:\mathbb{R}^k \to \mathbb{R}^{k\times m} }[/math], and [math]\displaystyle{ \gamma:\mathbb{R}^k \times \mathbb{R}^k \to \mathbb{R}^{k\times l} }[/math] are given functions such that a unique solution [math]\displaystyle{ (Y_t) }[/math] exists. Let [math]\displaystyle{ \mathcal{S}\subset \mathbb{R}^k }[/math] be an open set (the solvency region) and
- [math]\displaystyle{ \tau_\mathcal{S} = \inf\{ t\gt 0: Y_t \notin \mathcal{S} \} }[/math]
be the bankruptcy time. The optimal stopping problem is:
- [math]\displaystyle{ V(y) = \sup_{\tau \le \tau_\mathcal{S}} J^\tau (y) = \sup_{\tau \le \tau_\mathcal{S}} \mathbb{E}_y \left[ M(Y_\tau) + \int_0^\tau L(Y_t) dt \right]. }[/math]
It turns out that under some regularity conditions,[5] the following verification theorem holds:
If a function [math]\displaystyle{ \phi:\bar{\mathcal{S}}\to \mathbb{R} }[/math] satisfies
- [math]\displaystyle{ \phi \in C(\bar{\mathcal{S}}) \cap C^1(\mathcal{S}) \cap C^2(\mathcal{S}\setminus \partial D) }[/math] where the continuation region is [math]\displaystyle{ D = \{y\in\mathcal{S}: \phi(y) \gt M(y) \} }[/math],
- [math]\displaystyle{ \phi \ge M }[/math] on [math]\displaystyle{ \mathcal{S} }[/math], and
- [math]\displaystyle{ \mathcal{A}\phi + L \le 0 }[/math] on [math]\displaystyle{ \mathcal{S} \setminus \partial D }[/math], where [math]\displaystyle{ \mathcal{A} }[/math] is the infinitesimal generator of [math]\displaystyle{ (Y_t) }[/math]
then [math]\displaystyle{ \phi(y) \ge V(y) }[/math] for all [math]\displaystyle{ y\in \bar{\mathcal{S}} }[/math]. Moreover, if
- [math]\displaystyle{ \mathcal{A}\phi + L = 0 }[/math] on [math]\displaystyle{ D }[/math]
then [math]\displaystyle{ \phi(y) = V(y) }[/math] for all [math]\displaystyle{ y\in \bar{\mathcal{S}} }[/math] and [math]\displaystyle{ \tau^* = \inf\{ t\gt 0: Y_t\notin D\} }[/math] is an optimal stopping time.
These conditions can also be written in a more compact form (the integro-variational inequality):
- [math]\displaystyle{ \max\left\{ \mathcal{A}\phi + L, M-\phi \right\} = 0 }[/math] on [math]\displaystyle{ \mathcal{S} \setminus \partial D. }[/math]
Examples
Coin tossing
(Example where [math]\displaystyle{ \mathbb{E}(y_i) }[/math] converges)
You have a fair coin and are repeatedly tossing it. Each time, before it is tossed, you can choose to stop tossing it and get paid (in dollars, say) the average number of heads observed.
You wish to maximise the amount you get paid by choosing a stopping rule. If [math]\displaystyle{ X_i }[/math] (for i ≥ 1) form a sequence of independent, identically distributed random variables with Bernoulli distribution
- [math]\displaystyle{ \text{Bern}\left(\frac{1}{2}\right), }[/math]
and if
- [math]\displaystyle{ y_i = \frac 1 i \sum_{k=1}^{i} X_k }[/math]
then the sequences [math]\displaystyle{ (X_i)_{i\geq 1} }[/math], and [math]\displaystyle{ (y_i)_{i\geq 1} }[/math] are the objects associated with this problem.
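The optimal rule for this game (the Chow–Robbins problem) has no simple closed form, but its value can be approximated by truncating the game at a finite horizon and applying backward induction. The sketch below is such a truncation; the horizon N and the forced stop at N are assumptions of the approximation, which therefore yields a lower bound on the true value.

```python
def chow_robbins_value(N=1000):
    """Approximate value of the coin-tossing game, truncated at N tosses.

    State: (heads h, tosses t).  Stopping pays h / t; continuing is worth
    the average value of the two equally likely successor states.  At the
    truncation horizon the player is forced to stop, so the result is a
    lower bound on the infinite-horizon value.
    """
    value = [h / N for h in range(N + 1)]        # forced stop at t = N
    for t in range(N - 1, 0, -1):
        value = [max(h / t, 0.5 * (value[h] + value[h + 1]))
                 for h in range(t + 1)]
    # Before the first toss there is nothing to cash in, so continue.
    return 0.5 * (value[0] + value[1])

print(chow_robbins_value())   # approaches roughly 0.79 as N grows
```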
House selling
(Example where [math]\displaystyle{ \mathbb{E}(y_i) }[/math] does not necessarily converge)
You have a house and wish to sell it. Each day you are offered [math]\displaystyle{ X_n }[/math] for your house, and pay [math]\displaystyle{ k }[/math] to continue advertising it. If you sell your house on day [math]\displaystyle{ n }[/math], you will earn [math]\displaystyle{ y_n }[/math], where [math]\displaystyle{ y_n = (X_n - nk) }[/math].
You wish to maximise the amount you earn by choosing a stopping rule.
In this example, the sequence ([math]\displaystyle{ X_i }[/math]) is the sequence of offers for your house, and the sequence of reward functions is how much you will earn.
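When the offers are i.i.d. with a known distribution, past offers cannot be recalled, and the horizon is unbounded, the optimal rule is a reservation-price rule: accept the first offer of at least [math]\displaystyle{ a^* }[/math], where [math]\displaystyle{ a^* }[/math] solves [math]\displaystyle{ \mathbb{E}[(X - a^*)^+] = k }[/math], i.e. the expected gain from one more day of search equals the advertising cost. The sketch below solves this equation by bisection; the Uniform(0, 1) offer distribution is an assumption made for the example.

```python
def reservation_price(cost, upper=1.0, tol=1e-10):
    """Reservation price a* for house selling with i.i.d. Uniform(0, upper)
    offers, no recall, and advertising cost `cost` per day.

    a* solves E[(X - a)^+] = cost; for Uniform(0, upper) the left-hand side
    is (upper - a)**2 / (2 * upper), which is decreasing in a, so bisection
    on [0, upper] finds the root.  Optimal rule: accept the first offer >= a*.
    """
    def expected_excess(a):
        return (upper - a) ** 2 / (2 * upper)

    lo, hi = 0.0, upper
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if expected_excess(mid) > cost:
            lo = mid       # one more day of search still pays for itself
        else:
            hi = mid
    return (lo + hi) / 2

# Offers uniform on [0, 1], daily cost 0.01: accept any offer of about 0.86 or more.
print(round(reservation_price(0.01), 3))
```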
Secretary problem
(Example where [math]\displaystyle{ (X_i) }[/math] is a finite sequence)
You are observing a sequence of objects which can be ranked from best to worst. You wish to choose a stopping rule which maximises your chance of picking the best object.
Here, if [math]\displaystyle{ R_1, \ldots, R_n }[/math] (n is some large number) are the ranks of the objects, and [math]\displaystyle{ y_i }[/math] is the chance you pick the best object if you stop intentionally rejecting objects at step i, then [math]\displaystyle{ (R_i) }[/math] and [math]\displaystyle{ (y_i) }[/math] are the sequences associated with this problem. This problem was solved in the early 1960s by several people. An elegant solution to the secretary problem and several modifications of this problem is provided by the more recent odds algorithm of optimal stopping (Bruss algorithm).
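For the classical problem, the indicator of "object i is the best seen so far" has probability [math]\displaystyle{ p_i = 1/i }[/math], and these indicators are independent, so the odds algorithm applies directly: sum the odds [math]\displaystyle{ r_i = p_i/(1-p_i) }[/math] backwards until they reach 1. The sketch below is a straightforward implementation of that rule.

```python
def odds_algorithm(p):
    """Bruss's odds algorithm (sum-the-odds rule).

    p[i] is the success probability of the i-th of n independent indicator
    events (0-indexed).  Returns (s, win): let the first s events pass,
    then stop at the next success; `win` is the resulting probability of
    stopping on the last success, which this rule maximises.
    """
    r = [pi / (1 - pi) if pi < 1 else float("inf") for pi in p]
    R, Q, s = 0.0, 1.0, 0
    for i in range(len(p) - 1, -1, -1):   # sum the odds backwards
        R += r[i]
        Q *= 1 - p[i]
        if R >= 1:
            s = i
            break
    return s, Q * R

# Classical secretary problem with n objects: the event "object i is the
# best seen so far" has probability 1/i, and these events are independent.
n = 100
s, win = odds_algorithm([1 / i for i in range(1, n + 1)])
print(s, round(win, 4))   # let ~n/e objects pass; win probability is near 1/e
```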
Search theory
Economists have studied a number of optimal stopping problems similar to the 'secretary problem', and typically call this type of analysis 'search theory'. Search theory has especially focused on a worker's search for a high-wage job, or a consumer's search for a low-priced good.
Parking problem
A special example of an application of search theory is the problem of optimally selecting a parking space by a driver going to the opera (theater, shopping, etc.). Approaching the destination, the driver drives down the street along which there are parking spaces; usually only some of the spaces are free. The destination is clearly visible, so the remaining distance is easy to assess. The driver's task is, without turning around, to choose a free parking space as close to the destination as possible.[6]
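One common way to formalise this (the independence of the spaces, the occupancy probability, and the penalty for overshooting the destination are all modelling assumptions) indexes the spaces by their distance from the destination and solves a short backward induction, as in the sketch below.

```python
def parking_policy(n_spaces=50, p_free=0.2, miss_cost=100.0):
    """Backward induction for a simple parking model (illustrative assumptions).

    Spaces are indexed by their distance i = n_spaces, ..., 1, 0 from the
    destination; each is free independently with probability p_free.  The
    driver sees one space at a time, parking at distance i costs i, and
    passing the destination without having parked costs miss_cost.
    V[i] is the expected cost on reaching distance i, before seeing
    whether that space is free.
    """
    V = [0.0] * (n_spaces + 1)
    V[0] = (1 - p_free) * miss_cost      # the space at the destination itself
    threshold = 0
    for i in range(1, n_spaces + 1):
        park, go_on = float(i), V[i - 1]
        V[i] = p_free * min(park, go_on) + (1 - p_free) * go_on
        if park <= go_on:
            threshold = i                # parking here is (weakly) better
    return threshold, V[n_spaces]

# Take the first free space once within `threshold` spaces of the destination.
print(parking_policy())
```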
Option trading
In the trading of options on financial markets, the holder of an American option is allowed to exercise the right to buy (or sell) the underlying asset at a predetermined price at any time before or at the expiry date. Therefore, the valuation of American options is essentially an optimal stopping problem. Consider a classical Black–Scholes set-up and let [math]\displaystyle{ r }[/math] be the risk-free interest rate and [math]\displaystyle{ \delta }[/math] and [math]\displaystyle{ \sigma }[/math] be the dividend rate and volatility of the stock. The stock price [math]\displaystyle{ S }[/math] follows geometric Brownian motion
- [math]\displaystyle{ S_t = S_0 \exp\left\{ \left(r - \delta - \frac{\sigma^2}{2}\right) t + \sigma B_t \right\} }[/math]
under the risk-neutral measure.
When the option is perpetual, the optimal stopping problem is
- [math]\displaystyle{ V(x) = \sup_{\tau} \mathbb{E}_x \left[ e^{-r\tau} g(S_\tau) \right] }[/math]
where the payoff function is [math]\displaystyle{ g(x) = (x-K)^+ }[/math] for a call option and [math]\displaystyle{ g(x) = (K-x)^+ }[/math] for a put option. The variational inequality is
- [math]\displaystyle{ \max\left\{ \frac{1}{2} \sigma^2 x^2 V''(x) + (r-\delta) x V'(x) - rV(x), g(x) - V(x) \right\} = 0 }[/math]
for all [math]\displaystyle{ x \in (0,\infty)\setminus \{b\} }[/math] where [math]\displaystyle{ b }[/math] is the exercise boundary. The solution is known to be[7]
- (Perpetual call) [math]\displaystyle{ V(x) = \begin{cases} (b-K)(x/b)^\gamma & x\in(0,b) \\ x-K & x\in[b,\infty) \end{cases} }[/math] where [math]\displaystyle{ \gamma = (\sqrt{\nu^2 + 2r} - \nu) / \sigma }[/math] and [math]\displaystyle{ \nu = (r-\delta)/\sigma - \sigma / 2, \quad b = \gamma K / (\gamma - 1). }[/math]
- (Perpetual put) [math]\displaystyle{ V(x) = \begin{cases} K - x & x\in(0,c] \\ (K-c)(x/c)^{\tilde{\gamma}} & x\in(c,\infty) \end{cases} }[/math] where [math]\displaystyle{ \tilde{\gamma} = -(\sqrt{\nu^2 + 2r} + \nu) / \sigma }[/math] and [math]\displaystyle{ \nu = (r-\delta)/\sigma - \sigma / 2, \quad c = \tilde{\gamma} K / (\tilde{\gamma} - 1). }[/math]
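The closed forms above are straightforward to evaluate numerically. The sketch below computes [math]\displaystyle{ \gamma }[/math], the exercise boundary, and the option value for the perpetual call and put; the parameter values are illustrative assumptions.

```python
import math

def perpetual_call(x, K, r, delta, sigma):
    """Perpetual American call via the closed form above (needs delta > 0,
    otherwise gamma = 1 and the exercise boundary b is infinite)."""
    nu = (r - delta) / sigma - sigma / 2
    gamma = (math.sqrt(nu * nu + 2 * r) - nu) / sigma
    b = gamma * K / (gamma - 1)                  # optimal exercise boundary
    return (b - K) * (x / b) ** gamma if x < b else x - K

def perpetual_put(x, K, r, delta, sigma):
    """Perpetual American put via the closed form above."""
    nu = (r - delta) / sigma - sigma / 2
    gamma_t = -(math.sqrt(nu * nu + 2 * r) + nu) / sigma
    c = gamma_t * K / (gamma_t - 1)              # optimal exercise boundary
    return K - x if x <= c else (K - c) * (x / c) ** gamma_t

# Illustrative parameters: spot 100, strike 100, r = 5%, dividend yield 2%, vol 30%.
print(round(perpetual_call(100, 100, 0.05, 0.02, 0.3), 2))
print(round(perpetual_put(100, 100, 0.05, 0.02, 0.3), 2))
```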
On the other hand, when the expiry date is finite, the problem is associated with a two-dimensional free-boundary problem with no known closed-form solution. Various numerical methods can, however, be used. See Black–Scholes model for various valuation methods, and Fugit for a discrete, tree-based calculation of the optimal time to exercise.
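One standard numerical method for the finite-expiry case is a binomial (Cox–Ross–Rubinstein) tree with early exercise checked at every node, which is exactly the discrete dynamic-programming form of the stopping problem. The sketch below prices an American put this way; the parameter values and the choice of 500 steps are illustrative assumptions.

```python
import math

def american_put_binomial(S0, K, r, delta, sigma, T, steps=500):
    """Cox-Ross-Rubinstein binomial tree for an American put.

    At each node the discounted risk-neutral continuation value is compared
    with the immediate exercise payoff; taking the maximum is the backward
    induction step of the optimal stopping problem with finite expiry.
    """
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    q = (math.exp((r - delta) * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)

    # Option values at expiry.
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]

    # Roll back through the tree, checking early exercise at every node.
    for n in range(steps - 1, -1, -1):
        for j in range(n + 1):
            cont = disc * (q * values[j + 1] + (1 - q) * values[j])
            exercise = max(K - S0 * u**j * d**(n - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]

# Illustrative parameters: at-the-money one-year put, r = 5%, no dividends, vol 20%.
print(round(american_put_binomial(100, 100, 0.05, 0.0, 0.2, 1.0), 3))
```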
See also
- Halting problem
- Markov decision process
- Optional stopping theorem
- Prophet inequality
- Stochastic control
References
Citations
1. Chow, Y. S.; Robbins, H.; Siegmund, D. (1971). Great Expectations: The Theory of Optimal Stopping. Boston: Houghton Mifflin.
2. Ferguson, Thomas S. (2007). Optimal Stopping and Applications. UCLA. https://www.math.ucla.edu/~tom/Stopping/Contents.html.
3. Hill, Theodore P. (2009). "Knowing When to Stop". American Scientist 97 (2): 126–133. doi:10.1511/2009.77.126. ISSN 1545-2786. (For a French translation, see the cover story in the July 2009 issue of Pour la Science.)
4. Peskir, Goran; Shiryaev, Albert (2006). Optimal Stopping and Free-Boundary Problems. Lectures in Mathematics, ETH Zürich. doi:10.1007/978-3-7643-7390-0. ISBN 978-3-7643-2419-3.
5. Øksendal, B.; Sulem, A. (2007). Applied Stochastic Control of Jump Diffusions. doi:10.1007/978-3-540-69826-5. ISBN 978-3-540-69825-8.
6. MacQueen, J.; Miller Jr., R. G. (1960). "Optimal persistence policies". Operations Research 8 (3): 362–380. doi:10.1287/opre.8.3.362. ISSN 0030-364X.
7. Karatzas, Ioannis; Shreve, Steven E. (1998). Methods of Mathematical Finance. Stochastic Modelling and Applied Probability 39. doi:10.1007/b98840. ISBN 978-0-387-94839-3.
Sources
- Thomas S. Ferguson, Optimal Stopping and Applications, retrieved on 21 June 2007
- Thomas S. Ferguson, "Who solved the secretary problem?" Statistical Science, Vol. 4.,282–296, (1989)
- F. Thomas Bruss. "Sum the odds to one and stop." Annals of Probability, Vol. 28, 1384–1391,(2000)
- F. Thomas Bruss. "The art of a right decision: Why decision makers want to know the odds-algorithm." Newsletter of the European Mathematical Society, Issue 62, 14–20, (2006)
- Rogerson, R.; Shimer, R.; Wright, R. (2005). "Search-theoretic models of the labor market: a survey". Journal of Economic Literature 43 (4): 959–88. doi:10.1257/002205105775362014. http://www.nber.org/papers/w10655.pdf.
Original source: https://en.wikipedia.org/wiki/Optimal_stopping.