Stopping time

Figure: Example of a stopping time: a hitting time of Brownian motion. The process starts at 0 and is stopped as soon as it hits 1.

In probability theory, in particular in the study of stochastic processes, a stopping time (also Markov time, Markov moment, optional stopping time or optional time[1]) is a specific type of “random time”: a random variable whose value is interpreted as the time at which a given stochastic process exhibits a certain behavior of interest. A stopping time is often defined by a stopping rule, a mechanism for deciding whether to continue or stop a process on the basis of the present position and past events, and which will almost always lead to a decision to stop at some finite time.

Stopping times occur in decision theory, and the optional stopping theorem is an important result in this context. Stopping times are also frequently applied in mathematical proofs to “tame the continuum of time”, as Chung put it in his book (1982).

Definition

Discrete time

Let [math]\displaystyle{ \tau }[/math] be a random variable defined on the filtered probability space [math]\displaystyle{ (\Omega, \mathcal F, (\mathcal F_n)_{n \in \mathbb N}, P) }[/math] with values in [math]\displaystyle{ \mathbb N \cup \{ +\infty \} }[/math]. Then [math]\displaystyle{ \tau }[/math] is called a stopping time (with respect to the filtration [math]\displaystyle{ \mathbb F = (\mathcal F_n)_{n \in \mathbb N} }[/math]) if the following condition holds:

[math]\displaystyle{ \{ \tau = n \} \in \mathcal F_n }[/math] for all [math]\displaystyle{ n \in \mathbb N }[/math]

Intuitively, this condition means that the "decision" of whether to stop at time [math]\displaystyle{ n }[/math] must be based only on the information present at time [math]\displaystyle{ n }[/math], not on any future information.
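This condition can be mirrored in a small code sketch: the stop/continue decision at step n is allowed to read only the first n + 1 observations. The ±1 random walk, the target level 3, and the 10,000-step horizon below are arbitrary illustrative choices, not part of the definition.

```python
import random
from itertools import accumulate

def first_hitting_time(steps, level):
    """First index n at which the partial-sum walk S_n reaches `level`.

    The decision to stop at time n inspects only steps[:n+1], i.e. only
    information contained in F_n, so this random time is a stopping time.
    Returns None if the level is never reached on the observed horizon
    (corresponding to tau = +infinity).
    """
    s = 0
    for n, x in enumerate(steps):
        s += x
        if s == level:
            return n
    return None

random.seed(0)
walk = [random.choice([-1, 1]) for _ in range(10_000)]  # symmetric +-1 walk
tau = first_hitting_time(walk, level=3)
```

Note that the rule never peeks ahead: replacing the early `return` with, say, "stop at the time of the walk's overall maximum" would require scanning the whole list first, which is exactly what the stopping-time condition forbids.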

General case

Let [math]\displaystyle{ \tau }[/math] be a random variable defined on the filtered probability space [math]\displaystyle{ (\Omega, \mathcal F, (\mathcal F_t)_{t \in T}, P) }[/math] with values in [math]\displaystyle{ T }[/math]. In most cases, [math]\displaystyle{ T=[0,+ \infty) }[/math]. Then [math]\displaystyle{ \tau }[/math] is called a stopping time (with respect to the filtration [math]\displaystyle{ \mathbb F= (\mathcal F_t)_{t \in T} }[/math]) if the following condition holds:

[math]\displaystyle{ \{ \tau \leq t \} \in \mathcal F_t }[/math] for all [math]\displaystyle{ t \in T }[/math]

As adapted process

Let [math]\displaystyle{ \tau }[/math] be a random variable defined on the filtered probability space [math]\displaystyle{ (\Omega, \mathcal F, (\mathcal F_t)_{t \in T}, P) }[/math] with values in [math]\displaystyle{ T }[/math]. Then [math]\displaystyle{ \tau }[/math] is called a stopping time if and only if the stochastic process [math]\displaystyle{ X=(X_t)_{t \in T} }[/math], defined by

[math]\displaystyle{ X_t:= \begin{cases} 1 & \text{ if } t \lt \tau \\ 0 &\text{ if } t \geq \tau \end{cases} }[/math]

is adapted to the filtration [math]\displaystyle{ \mathbb F= (\mathcal F_t)_{t \in T} }[/math].

Comments

Some authors explicitly exclude cases where [math]\displaystyle{ \tau }[/math] can be [math]\displaystyle{ + \infty }[/math], whereas other authors allow [math]\displaystyle{ \tau }[/math] to take any value in the closure of [math]\displaystyle{ T }[/math].

Examples

To illustrate some examples of random times that are stopping rules and some that are not, consider a gambler playing roulette with a typical house edge, starting with $100 and betting $1 on red in each game:

  • Playing exactly five games corresponds to the stopping time τ = 5, and is a stopping rule.
  • Playing until they either run out of money or have played 500 games is a stopping rule.
  • Playing until they obtain the maximum amount ahead they will ever be is not a stopping rule and does not provide a stopping time, as it requires information about the future as well as the present and past.
  • Playing until they double their money (borrowing if necessary) is not a stopping rule, because a stopping rule must lead to stopping at some finite time almost surely, and there is a positive probability that they will never double their money.
  • Playing until they either double their money or run out of money is a stopping rule, even though there is potentially no limit to the number of games they play, since the probability that they stop in a finite time is 1.
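The "double or ruin" rule in the last bullet can be simulated directly. A minimal sketch, assuming a $1 even-money bet on red with win probability 18/38 (American roulette) and no table limit; the starting bankroll and seed are illustrative:

```python
import random

def play_until_double_or_ruin(bankroll=100, target=200, p_win=18/38, rng=None):
    """Stopping rule: quit when the bankroll reaches `target` or hits 0.

    Whether to stop after game n depends only on the outcomes of games
    1..n, so the number of games played is a stopping time; with the
    house edge it is finite with probability 1, even though no fixed
    upper bound on the number of games exists in advance.
    """
    rng = rng or random.Random()
    games = 0
    while 0 < bankroll < target:
        bankroll += 1 if rng.random() < p_win else -1
        games += 1
    return games, bankroll

games, final = play_until_double_or_ruin(rng=random.Random(42))
```

Because each bet changes the bankroll by exactly $1, at least 100 games are always needed before either boundary can be reached, yet the eventual stopping time has no deterministic upper bound.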

To illustrate the more general definition of stopping time, consider Brownian motion, which is a stochastic process [math]\displaystyle{ (B_t)_{t\geq 0} }[/math], where each [math]\displaystyle{ B_t }[/math] is a random variable defined on the probability space [math]\displaystyle{ (\Omega, \mathcal{F}, \mathbb{P}) }[/math]. We define a filtration on this probability space by letting [math]\displaystyle{ \mathcal{F}_t }[/math] be the σ-algebra generated by all the sets of the form [math]\displaystyle{ (B_s)^{-1}(A) }[/math] where [math]\displaystyle{ 0\leq s \leq t }[/math] and [math]\displaystyle{ A\subseteq \mathbb{R} }[/math] is a Borel set. Intuitively, an event E is in [math]\displaystyle{ \mathcal{F}_t }[/math] if and only if we can determine whether E is true or false just by observing the Brownian motion from time 0 to time t.

  • Every constant [math]\displaystyle{ \tau:=t_0 }[/math] is (trivially) a stopping time; it corresponds to the stopping rule "stop at time [math]\displaystyle{ t_0 }[/math]".
  • Let [math]\displaystyle{ a\in\mathbb{R}. }[/math] Then [math]\displaystyle{ \tau:=\inf \{t\geq 0 \mid B_t = a\} }[/math] is a stopping time for Brownian motion, corresponding to the stopping rule: "stop as soon as the Brownian motion hits the value a."
  • Another stopping time is given by [math]\displaystyle{ \tau:=\inf \{t\geq 1 \mid B_s \gt 0 \text{ for all } s\in[t-1,t]\} }[/math]. It corresponds to the stopping rule "stop as soon as the Brownian motion has been positive over a contiguous stretch of length 1 time unit."
  • In general, if τ1 and τ2 are stopping times on [math]\displaystyle{ \left(\Omega, \mathcal{F}, \left\{ \mathcal{F}_{t} \right \}_{t \geq 0}, \mathbb{P}\right) }[/math] then their minimum [math]\displaystyle{ \tau _1 \wedge \tau _2 }[/math], their maximum [math]\displaystyle{ \tau _1 \vee \tau _2 }[/math], and their sum τ1 + τ2 are also stopping times. (This is not true for differences and products, because these may require "looking into the future" to determine when to stop.)
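The hitting time in the second bullet can be approximated on a time grid: Brownian increments are drawn one at a time, and the stop decision at each grid time uses only the path observed so far, as the condition {τ ≤ t} ∈ F_t requires. The step size, level, horizon, and seed below are illustrative assumptions, and the discretization only approximates the continuous-time hitting time.

```python
import random

def hitting_time(a, dt=1e-3, horizon=100.0, rng=None):
    """Approximate tau = inf{t >= 0 : B_t = a} for Brownian motion.

    Increments B_{t+dt} - B_t ~ N(0, dt) are generated on a grid; the
    rule "stop at the first grid time at which level a has been crossed"
    never looks ahead of the current time. Returns None if the level is
    not reached before `horizon` (i.e. tau exceeds the observed window).
    """
    rng = rng or random.Random()
    b, t = 0.0, 0.0
    while t < horizon:
        b += rng.gauss(0.0, dt ** 0.5)
        t += dt
        if b >= a:  # first grid time at or past the crossing of level a
            return t
    return None

tau = hitting_time(a=1.0, rng=random.Random(7))
```

By contrast, the time of the path's running maximum over [0, horizon] cannot be computed inside the loop without seeing the whole path first, matching the third roulette example above.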

Hitting times, like the second example above, are important examples of stopping times. While it is relatively straightforward to show that essentially all stopping times are hitting times,[2] it can be much more difficult to show that a certain hitting time is a stopping time. Results of the latter type are known as the Début theorem.

Localization

Stopping times are frequently used to generalize certain properties of stochastic processes to situations in which the required property is satisfied in only a local sense. First, if X is a process and τ is a stopping time, then Xτ is used to denote the process X stopped at time τ.

[math]\displaystyle{ X^\tau_t=X_{\min(t,\tau)} }[/math]
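In discrete time the stopped process is simply the path frozen at τ. A small sketch, with an arbitrary integer-time sample path chosen for illustration:

```python
def stopped_path(path, tau):
    """Stopped process X^tau with X^tau_t = X_{min(t, tau)}.

    `path` lists X_0, X_1, ... and `tau` is an integer stopping time;
    from time tau onward the stopped process stays frozen at X_tau.
    """
    return [path[min(t, tau)] for t in range(len(path))]

x = [0, 1, 3, 2, 5, 4]
print(stopped_path(x, 2))  # -> [0, 1, 3, 3, 3, 3], frozen at X_2 = 3
```

If τ exceeds the observed horizon, the stopped path coincides with the original path, and stopping at τ = 0 freezes the process at its initial value.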

Then, X is said to locally satisfy some property P if there exists a sequence of stopping times τn, which increases to infinity and for which the processes

[math]\displaystyle{ \mathbf{1}_{\{\tau_n\gt 0\}}X^{\tau_n} }[/math]

satisfy property P. Common examples, with time index set I = [0, ∞), are as follows:

Local martingale process. A process X is a local martingale if it is càdlàg (right-continuous with left limits) and there exists a sequence of stopping times τn increasing to infinity, such that

[math]\displaystyle{ \mathbf{1}_{\{\tau_n\gt 0\}}X^{\tau_n} }[/math]

is a martingale for each n.

Locally integrable process. A non-negative and increasing process X is locally integrable if there exists a sequence of stopping times τn increasing to infinity, such that

[math]\displaystyle{ \operatorname{E} \left [\mathbf{1}_{\{\tau_n\gt 0\}}X^{\tau_n} \right ]\lt \infty }[/math]

for each n.

Types of stopping times

Stopping times, with time index set I = [0, ∞), are often classified into several types according to whether it is possible to predict when they are about to occur.

A stopping time τ is predictable if it is equal to the limit of an increasing sequence of stopping times τn satisfying τn < τ whenever τ > 0. The sequence τn is said to announce τ, and predictable stopping times are sometimes known as announceable. Examples of predictable stopping times are hitting times of continuous and adapted processes. If τ is the first time at which a continuous and real valued process X is equal to some value a, then it is announced by the sequence τn, where τn is the first time at which X is within a distance of 1/n of a.
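The announcing sequence in the last sentence can be sketched on a discretely sampled path. The path and level below are arbitrary illustrative values standing in for a continuous process observed on a grid:

```python
def announcing_times(path, a, n_max=5):
    """tau_n = first index at which `path` comes within 1/n of level `a`.

    For a continuous path that hits a at time tau, these tau_n increase
    to tau and satisfy tau_n < tau before the hit, so they announce tau.
    Entries are None if the corresponding neighborhood is never entered.
    """
    return [
        next((t for t, x in enumerate(path) if abs(x - a) <= 1 / n), None)
        for n in range(1, n_max + 1)
    ]

path = [0, 0.3, 0.55, 0.8, 0.95, 1.0, 1.2]  # reaches a = 1.0 at index 5
print(announcing_times(path, a=1.0))        # -> [0, 2, 3, 3, 3]
```

Each τn fires as soon as the path enters the band of width 1/n around a, so the sequence is nondecreasing and bounded by the hitting index itself.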

Accessible stopping times are those that can be covered by a sequence of predictable times. That is, a stopping time τ is accessible if P(τ = τn for some n) = 1, where the τn are predictable times.

A stopping time τ is totally inaccessible if it can never be announced by an increasing sequence of stopping times. Equivalently, P(τ = σ < ∞) = 0 for every predictable time σ. Examples of totally inaccessible stopping times include the jump times of Poisson processes.

Every stopping time τ can be uniquely decomposed into an accessible and totally inaccessible time. That is, there exists a unique accessible stopping time σ and totally inaccessible time υ such that τ = σ whenever σ < ∞, τ = υ whenever υ < ∞, and τ = ∞ whenever σ = υ = ∞. Note that in the statement of this decomposition result, stopping times do not have to be almost surely finite, and can equal ∞.

Stopping rules in clinical trials

Clinical trials in medicine often perform interim analyses, in order to determine whether the trial has already met its endpoints. However, interim analyses create the risk of false-positive results, and therefore stopping boundaries are used to determine the number and timing of interim analyses (also known as alpha spending, denoting the rate of false positives). At each of R interim tests, the trial is stopped if the p-value falls below a threshold that depends on the method used. See Sequential analysis.
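Why stopping boundaries are needed can be seen in a small Monte Carlo sketch: testing a trial with no true effect at an unadjusted one-sided 5% level at each of five interim looks inflates the overall false-positive rate well above 5%. The sample sizes, number of looks, and trial count below are illustrative assumptions, not a recommended design.

```python
import math
import random

def false_positive_rate(n_looks=5, n_per_look=100, n_trials=2000, rng=None):
    """Monte Carlo false-positive rate for a null-effect trial that is
    z-tested at every interim look against the unadjusted one-sided 5%
    critical value of the standard normal (1.6449)."""
    rng = rng or random.Random(0)
    z_crit = 1.6449
    rejections = 0
    for _ in range(n_trials):
        total, n = 0.0, 0
        for _ in range(n_looks):
            for _ in range(n_per_look):
                total += rng.gauss(0.0, 1.0)  # outcomes under the null
                n += 1
            if total / math.sqrt(n) > z_crit:  # interim z-test
                rejections += 1                # trial stopped "early"
                break
    return rejections / n_trials

rate = false_positive_rate()
print(f"overall false-positive rate with 5 unadjusted looks: {rate:.3f}")
```

Alpha-spending methods choose stricter per-look thresholds so that the rejections accumulated across all looks sum to the intended overall rate.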


References

  1. Kallenberg, Olav (2017). Random Measures, Theory and Applications. Probability Theory and Stochastic Modelling. 77. Switzerland: Springer. p. 347. doi:10.1007/978-3-319-41598-7. ISBN 978-3-319-41596-3.
  2. Fischer, Tom (2013). "On simple representations of stopping times and stopping time sigma-algebras". Statistics and Probability Letters 83 (1): 345–349. doi:10.1016/j.spl.2012.09.024. 
