Robbins' problem
In probability theory, Robbins' problem of optimal stopping[1], named after Herbert Robbins, is sometimes referred to as the fourth secretary problem or the problem of minimizing the expected rank with full information.[2]
Let X1, ... , Xn be independent, identically distributed random variables, uniform on [0, 1]. We observe the Xk's sequentially and must stop on exactly one of them. No recall of preceding observations is permitted. What stopping rule minimizes the expected rank of the selected observation, and what is its corresponding value?
The general solution to this full-information expected rank problem is unknown. The major difficulty is that the problem is fully history-dependent: the optimal rule depends at every stage on all preceding values, and not only on simpler sufficient statistics of these. Only bounds are known for the limiting value v as n goes to infinity, namely 1.908 < v < 2.329. It is known that there is some room to improve the lower bound by further computations for a truncated version of the problem. It is still not known how to improve on the upper bound, which stems from the subclass of memoryless threshold rules.[3][4][5]
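The upper bound mentioned above is obtained within the subclass of memoryless threshold rules, i.e. rules that accept the current observation as soon as it falls below a threshold depending only on its index and on n. The sketch below is a minimal Monte Carlo estimate of the expected rank achieved by such a rule; the threshold family c_k = min(1, λ/(n − k + 1)) and the parameter values are illustrative assumptions, not the optimized choices from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_rank_memoryless(n, lam, trials=20_000):
    """Monte Carlo estimate of the expected rank achieved by a memoryless
    threshold rule: stop at the first index k (1-based) with X_k < c_k,
    where c_k = min(1, lam / (n - k + 1)) depends only on k and n.
    This threshold family is purely illustrative."""
    total = 0
    for _ in range(trials):
        x = rng.random(n)
        stop = n - 1  # forced to accept the last observation if no threshold is crossed
        for k in range(1, n + 1):
            c = min(1.0, lam / (n - k + 1))
            if x[k - 1] < c:
                stop = k - 1
                break
        total += 1 + np.sum(x < x[stop])  # overall rank of the accepted observation
    return total / trials

if __name__ == "__main__":
    for lam in (0.5, 1.0, 2.0):
        print(f"lambda = {lam}: expected rank ~ {expected_rank_memoryless(50, lam):.3f}")
```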
A continuous-time version of the problem has been proposed, in which the observations arrive according to a homogeneous Poisson process of rate 1. Under some assumptions, the corresponding value function [math]\displaystyle{ w(t) }[/math] is bounded and Lipschitz continuous, and a differential equation for this value function has been derived.[6] The limiting value of [math]\displaystyle{ w(t) }[/math] as [math]\displaystyle{ t \to \infty }[/math] gives the solution of Robbins' problem, and it is shown that for large [math]\displaystyle{ t }[/math], [math]\displaystyle{ 1\leq w(t)\leq 2.33183 }[/math], consistent with the bounds mentioned above.
A simple suboptimal rule, which performs almost as well as the optimal rule, was proposed by Krieger & Samuel-Cahn.[7] The rule stops with the smallest [math]\displaystyle{ i }[/math] such that [math]\displaystyle{ R_i \lt ic/(n + i) }[/math] for a given constant c, where [math]\displaystyle{ R_i }[/math] is the relative rank of the ith observation and n is the total number of items. The rule also offers added flexibility: a curtailed version can be used to select an item with a given probability [math]\displaystyle{ P }[/math], [math]\displaystyle{ P \lt 1 }[/math], and the rule can be adapted to select two or more items. The problem of selecting a fixed percentage [math]\displaystyle{ \alpha }[/math], [math]\displaystyle{ 0 \lt \alpha \lt 1 }[/math], of the n items is also treated.
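A minimal simulation sketch of this rule follows; the constants c and the sample sizes are arbitrary illustrative choices rather than values recommended in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ks_rule_expected_rank(n, c, trials=5_000):
    """Monte Carlo estimate of the expected overall rank obtained by the rule:
    stop at the smallest i with R_i < i * c / (n + i), where R_i is the
    relative rank of X_i among the first i observations."""
    total = 0
    for _ in range(trials):
        x = rng.random(n)
        stop = n - 1  # accept the last item if the rule never fires
        for i in range(1, n + 1):
            r_i = 1 + np.sum(x[: i - 1] < x[i - 1])  # relative rank of the i-th observation
            if r_i < i * c / (n + i):
                stop = i - 1
                break
        total += 1 + np.sum(x < x[stop])  # overall rank of the selected item
    return total / trials

if __name__ == "__main__":
    for c in (2.0, 3.0, 4.0):
        print(f"c = {c}: expected rank ~ {ks_rule_expected_rank(100, c):.3f}")
```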
Chow-Robbins game
Another optimal stopping problem bearing Robbins' name is the Chow-Robbins game:[8][9]
Given an infinite sequence of i.i.d. random variables [math]\displaystyle{ X_1, X_2, ... }[/math] with distribution [math]\displaystyle{ F }[/math], how should one decide when to stop in order to maximize the expected value of the sample average [math]\displaystyle{ \frac 1n(X_1 + \cdots + X_n) }[/math], where [math]\displaystyle{ n }[/math] is the stopping time? The probability of eventually stopping must be 1 (that is, one is not allowed to keep sampling and never stop).
For any distribution [math]\displaystyle{ F }[/math] with finite second moment, there exists an optimal strategy, defined by a sequence of thresholds [math]\displaystyle{ \beta_1, \beta_2, ... }[/math]: keep sampling until [math]\displaystyle{ X_1 + \cdots + X_n \geq \beta_n }[/math].[10][11]
Optimal strategy for very large n
If [math]\displaystyle{ F }[/math] has finite second moment, then after subtracting the mean and dividing by the standard deviation, we get a distribution with mean zero and variance one. Consequently it suffices to study the case of [math]\displaystyle{ F }[/math] with mean zero and variance one.
With this, [math]\displaystyle{ \lim_{n} \beta_n/\sqrt n = \alpha \approx 0.8399236757 }[/math], where [math]\displaystyle{ \alpha }[/math] is the solution of the equation[note 1][math]\displaystyle{ \alpha=\left(1-\alpha^2\right) \int_0^{\infty} e^{\lambda \alpha-\lambda^2 / 2} d \lambda. }[/math]This can be proved by solving the analogous problem in continuous time for a Wiener process; in the limit [math]\displaystyle{ n\to \infty }[/math], the discrete-time problem becomes the same as the continuous-time problem.
This was proved independently by several authors.[12][13][14][15]
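As a rough illustration, the sketch below plays the game with standard normal observations and stops at the first n with X_1 + ⋯ + X_n ≥ α√n, i.e. the asymptotic rule without the lower-order corrections discussed next; the horizon cap and the number of trials are arbitrary safeguards.

```python
import numpy as np

rng = np.random.default_rng(2)
ALPHA = 0.8399236757  # limiting constant from the equation above

def play_once(max_steps=10_000):
    """Play one Chow-Robbins game with i.i.d. standard normal observations,
    stopping at the first n with X_1 + ... + X_n >= ALPHA * sqrt(n)
    (the asymptotic rule, ignoring lower-order corrections)."""
    s = 0.0
    for n in range(1, max_steps + 1):
        s += rng.standard_normal()
        if s >= ALPHA * np.sqrt(n):
            return s / n  # reward: sample average at the stopping time
    return s / max_steps  # safety cutoff only; the true rule has no horizon

if __name__ == "__main__":
    rewards = [play_once() for _ in range(1_000)]
    print("estimated expected reward:", float(np.mean(rewards)))
```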
When the game is a fair coin toss, with heads counting +1 and tails counting -1, there is a sharper result:[9][math]\displaystyle{ \beta_n=\alpha \sqrt{n}-1 / 2+\frac{(-2 \zeta(-1 / 2)) \sqrt{\alpha}}{\sqrt{\pi}} n^{-1 / 4}+O\left(n^{-7 / 24}\right) }[/math]where [math]\displaystyle{ \zeta }[/math] is the Riemann zeta function.
Optimal strategy for small n
When n is small, the asymptotic bound does not apply, and finding the value of [math]\displaystyle{ \beta_n }[/math] is much more difficult. Even the simplest case, where [math]\displaystyle{ X_1, X_2, ... }[/math] are fair coin tosses, is not fully solved.
For the fair coin toss, a strategy is a binary decision: after [math]\displaystyle{ n }[/math] tosses, with k heads and (n-k) tails, should one continue or stop? Since the one-dimensional simple random walk is recurrent, starting from any state [math]\displaystyle{ (k, n-k) }[/math] the probability of eventually having more heads than tails is 1. So if [math]\displaystyle{ k \leq n-k }[/math], one should always continue. If [math]\displaystyle{ k \gt n-k }[/math], however, it is difficult to decide whether to stop or continue.[16]
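The exact results cited below rest on backward induction over the states (n, k); the following is a minimal sketch of that idea under an assumed finite truncation horizon N, with the crude boundary value max(k/N, 1/2) standing in for the true continuation value, whereas the cited analyses use rigorous upper and lower bounds.

```python
from functools import lru_cache
import sys

sys.setrecursionlimit(5_000)

N = 1_000  # assumed truncation horizon; the cited exact analyses replace this
           # crude cutoff with rigorous upper and lower bounds on the value

@lru_cache(maxsize=None)
def value(n, k):
    """Approximate value of the fair-coin game after n tosses with k heads:
    the best expected final average attainable from this state."""
    if n == N:
        # crude boundary condition: stop if ahead, otherwise treat the
        # continuation as worth exactly 1/2 (its limiting value from a tie)
        return max(k / n, 0.5)
    cont = 0.5 * value(n + 1, k + 1) + 0.5 * value(n + 1, k)
    stop = k / n if n > 0 else float("-inf")  # cannot stop before the first toss
    return max(stop, cont)

def should_stop(n, k):
    """True if stopping at (n, k) is at least as good as continuing (approximately)."""
    return k / n >= 0.5 * value(n + 1, k + 1) + 0.5 * value(n + 1, k)

if __name__ == "__main__":
    print("approximate value of the game:", value(0, 0))
    for n in (1, 3, 5, 7, 9):
        k = (n + 1) // 2  # one more head than tail
        print(f"n = {n}, heads = {k}: stop? {should_stop(n, k)}")
```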
Christensen and Fischer[17] found an exact solution for all [math]\displaystyle{ n \leq 489241 }[/math].
Elton[9] found exact solutions for all [math]\displaystyle{ n \leq 9.06 \times 10^7 }[/math] and gave an almost always optimal decision rule, namely to stop as soon as [math]\displaystyle{ k - (n-k) \geq \Delta k_n }[/math], where [math]\displaystyle{ \Delta k_n = \left\lceil \alpha \sqrt{n} - \frac{1}{2} + \frac{\left(-2\zeta(-1/2)\right)\sqrt{\alpha}}{\sqrt{\pi}}\, n^{-1/4} \right\rceil }[/math]
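A short sketch that evaluates Δk_n from the formula above and applies the rule to simulated fair-coin games is given below; the numerical value of ζ(-1/2) is hard-coded, and the horizon cap and number of trials are arbitrary.

```python
import math
import random

ALPHA = 0.8399236757
ZETA_MINUS_HALF = -0.2078862250  # zeta(-1/2)

def delta_k(n):
    """Threshold from the rule above: stop once (heads - tails) >= delta_k(n)."""
    return math.ceil(
        ALPHA * math.sqrt(n)
        - 0.5
        + (-2 * ZETA_MINUS_HALF) * math.sqrt(ALPHA) / math.sqrt(math.pi) * n ** -0.25
    )

def play_once(max_steps=100_000):
    """Simulate one fair-coin game (heads = +1, tails = -1) under the rule
    and return the final sample average."""
    lead = 0
    for n in range(1, max_steps + 1):
        lead += 1 if random.random() < 0.5 else -1
        if lead >= delta_k(n):
            return lead / n
    return lead / max_steps  # safety cutoff only

if __name__ == "__main__":
    print("delta_k for n = 1..10:", [delta_k(n) for n in range(1, 11)])
    avg = sum(play_once() for _ in range(1_000)) / 1_000
    print("estimated expected reward:", avg)
```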
Importance
One motivation for studying Robbins' problem is that its solution would settle all four classical secretary problems. The major reason, however, is to understand how to cope with full history dependence in a (deceptively easy-looking) problem. At the Ester's Book International Conference in Israel (2006), Robbins' problem was accordingly named one of the four most important problems in the field of optimal stopping and sequential analysis.
History
Herbert Robbins presented the problem described above at the International Conference on Search and Selection in Real Time[note 2] in Amherst in 1990. He concluded his address with the words "I should like to see this problem solved before I die." Scientists working in the field of optimal stopping have since called it Robbins' problem. Robbins died in 2001.
References
- ↑ Chow, Y.S.; Moriguti, S.; Robbins, Herbert Ellis; Samuels, Stephen M. (1964). "Optimal Selection Based on Relative Rank". Israel Journal of Mathematics 2 (2): 81–90. doi:10.1007/bf02759948.
- ↑ Bruss, F. Thomas (2005). "What is known about Robbins' Problem?". Journal of Applied Probability 42: 108–120. doi:10.1239/jap/1110381374. https://www.jstor.org/stable/30040773.
- ↑ Bruss, F. Thomas; Ferguson, Thomas S. (1993). "Minimizing the expected rank with full information". Journal of Applied Probability 30 (3): 616–626. doi:10.2307/3214770. ISSN 0021-9002. http://www.jstor.org/stable/3214770.
- ↑ Bruss, F. Thomas; Ferguson, Thomas S. (1996). "Half-Prophets and Robbins' Problem of Minimizing the expected rank". Athens Conference on Applied Probability and Time Series Analysis. 114. New York, NY: Springer New York. pp. 1–17. doi:10.1007/978-1-4612-0749-8_1. ISBN 978-0-387-94788-4.
- ↑ Assaf, David; Samuel-Cahn, Ester (1996). "The secretary problem: Minimizing the expected rank with i.i.d. random variables". Advances in Applied Probability 28 (3): 828–852. doi:10.2307/1428183. ISSN 0001-8678.
- ↑ Bruss, F. Thomas; Swan, Yvik C. (2009). "A continuous-time approach to Robbins' problem of minimizing the expected rank". Journal of Applied Probability 46: 1–18. doi:10.1239/jap/1238592113.
- ↑ Krieger, Abba M.; Samuel-Cahn, Ester (2009). "The secretary problem of minimizing the expected rank: a simple suboptimal approach with generalization". Advances in Applied Probability 41: 1041–1058. doi:10.1239/aap/1261669585. https://www.jstor.org/stable/27793918.
- ↑ Chow, Y. S.; Robbins, Herbert (September 1965). "On optimal stopping rules for S_n/n". Illinois Journal of Mathematics 9 (3): 444–454. doi:10.1215/ijm/1256068146. ISSN 0019-2082. https://projecteuclid.org/journals/illinois-journal-of-mathematics/volume-9/issue-3/On-optimal-stopping-rules-for-S_nn/10.1215/ijm/1256068146.full.
- ↑ 9.0 9.1 9.2 Elton, John H.
- ↑ Dvoretzky, Aryeh. "Existence and properties of certain optimal stopping rules." Proc. Fifth Berkeley Symp. Math. Statist. Prob. Vol. 1. 1967.
- ↑ Teicher, H.; Wolfowitz, J. (1966-12-01). "Existence of optimal stopping rules for linear and quadratic rewards" (in en). Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 5 (4): 361–368. doi:10.1007/BF00535366. ISSN 1432-2064. https://doi.org/10.1007/BF00535366.
- ↑ Simons, Gordon; Yao, Yi-Ching (1989-08-01). "Optimally stopping the sample mean of a Wiener process with an unknown drift" (in en). Stochastic Processes and their Applications 32 (2): 347–354. doi:10.1016/0304-4149(89)90084-7. ISSN 0304-4149. https://www.sciencedirect.com/science/article/pii/0304414989900847.
- ↑ Shepp, L. A. (June 1969). "Explicit Solutions to Some Problems of Optimal Stopping". The Annals of Mathematical Statistics 40 (3): 993–1010. doi:10.1214/aoms/1177697604. ISSN 0003-4851. https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-40/issue-3/Explicit-Solutions-to-Some-Problems-of-Optimal-Stopping/10.1214/aoms/1177697604.full.
- ↑ Taylor, Howard M. (1968). "Optimal Stopping in a Markov Process". The Annals of Mathematical Statistics 39 (4): 1333–1344. ISSN 0003-4851. https://www.jstor.org/stable/2239702.
- ↑ Walker, Leroy H. (1969). "Regarding stopping rules for Brownian motion and random walks" (in en). Bulletin of the American Mathematical Society 75 (1): 46–50. doi:10.1090/S0002-9904-1969-12140-3. ISSN 0002-9904. https://www.ams.org/bull/1969-75-01/S0002-9904-1969-12140-3/.
- ↑ Häggström, Olle; Wästlund, Johan (2013). "Rigorous Computer Analysis of the Chow–Robbins Game". The American Mathematical Monthly 120 (10): 893. doi:10.4169/amer.math.monthly.120.10.893. https://www.tandfonline.com/doi/full/10.4169/amer.math.monthly.120.10.893.
- ↑ Christensen, Sören; Fischer, Simon (June 2022). "On the Sn/n problem" (in en). Journal of Applied Probability 59 (2): 571–583. doi:10.1017/jpr.2021.73. ISSN 0021-9002. https://www.cambridge.org/core/journals/journal-of-applied-probability/article/abs/on-the-snn-problem/8CE3A5834AC35E7ADD137749F61E27A2.
Footnotes
- ↑ The constant [math]\displaystyle{ \alpha }[/math] can be computed numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import root

def f(lambda_, alpha):
    # integrand of the defining equation for alpha
    return np.exp(lambda_ * alpha - lambda_**2 / 2)

def equation(alpha):
    alpha = float(np.ravel(alpha)[0])  # root passes a length-1 array
    integral, _ = quad(f, 0, np.inf, args=(alpha,))
    return integral * (1 - alpha**2) - alpha

solution = root(equation, 0.83992, tol=1e-15)

# Print the solution
if solution.success:
    print(f"Solved α = {solution.x[0]} with a residual of {solution.fun[0]}")
else:
    print("Solution did not converge")
```
- ↑ The Joint Summer Research Conferences in the Mathematical Sciences were held at the University of Massachusetts from June 7 to July 4, 1990. These were sponsored by the AMS, SIAM, and the Institute for Mathematical Statistics (IMS). Topics in 1990 were: Probability models and statistical analysis for ranking data, Inverse scattering on the line, Deformation theory of algebras and quantization with applications to physics, Strategies for sequential search and selection in real time, Schottky problems, and Logic, fields, and subanalytic sets.
Original source: https://en.wikipedia.org/wiki/Robbins' problem.