Singular control
In optimal control, problems of singular control are problems that are difficult to solve because a straightforward application of Pontryagin's minimum principle fails to yield a complete solution. Only a few such problems have been solved, such as Merton's portfolio problem in financial economics and trajectory optimization in aeronautics. A more technical explanation follows.

The most common difficulty in applying Pontryagin's principle arises when the Hamiltonian depends linearly on the control [math]\displaystyle{ u }[/math], i.e., has the form [math]\displaystyle{ H(u)=\phi(x,\lambda,t)u+\cdots }[/math], and the control is restricted to lie between an upper and a lower bound: [math]\displaystyle{ a\le u(t)\le b }[/math]. To minimize [math]\displaystyle{ H(u) }[/math], we must make [math]\displaystyle{ u }[/math] as large or as small as possible, depending on the sign of the switching function [math]\displaystyle{ \phi(x,\lambda,t) }[/math], specifically:
- [math]\displaystyle{ u(t) = \begin{cases} b, & \phi(x,\lambda,t)\lt 0 \\ ?, & \phi(x,\lambda,t)=0 \\ a, & \phi(x,\lambda,t)\gt 0.\end{cases} }[/math]
If [math]\displaystyle{ \phi }[/math] is positive at some times and negative at others, vanishing only instantaneously, then the solution is straightforward: it is a bang-bang control that switches from [math]\displaystyle{ b }[/math] to [math]\displaystyle{ a }[/math] at the times when [math]\displaystyle{ \phi }[/math] switches from negative to positive (and from [math]\displaystyle{ a }[/math] to [math]\displaystyle{ b }[/math] when it switches from positive to negative).
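The pointwise minimization above can be sketched as a small selector function. This is a minimal illustration, not from the article; the bounds [math]\displaystyle{ a=-1 }[/math], [math]\displaystyle{ b=1 }[/math] and the tolerance are assumptions for the sketch:

```python
def bang_bang(phi, a=-1.0, b=1.0, tol=1e-9):
    """Pointwise minimizer of H = phi*u + ... over a <= u <= b.

    Returns the control value dictated by the sign of the switching
    function phi, or None when phi vanishes (the singular case, where
    minimizing H gives no information about u).
    """
    if phi > tol:
        return a   # phi > 0: the smallest admissible u minimizes H
    if phi < -tol:
        return b   # phi < 0: the largest admissible u minimizes H
    return None    # phi == 0: singular; see the singular-control case below
```

For example, `bang_bang(0.5)` returns the lower bound and `bang_bang(-0.5)` the upper bound, reproducing the case analysis in the displayed formula.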
The case when [math]\displaystyle{ \phi }[/math] remains at zero for a finite length of time [math]\displaystyle{ t_1\le t\le t_2 }[/math] is called the singular control case. Between [math]\displaystyle{ t_1 }[/math] and [math]\displaystyle{ t_2 }[/math] the minimization of the Hamiltonian with respect to [math]\displaystyle{ u }[/math] gives us no useful information, and the solution in that time interval must be found from other considerations. One approach is to repeatedly differentiate [math]\displaystyle{ \partial H/\partial u }[/math] with respect to time until the control [math]\displaystyle{ u }[/math] explicitly reappears, though this is not guaranteed ever to happen. One can then set that expression to zero and solve for [math]\displaystyle{ u }[/math]. This amounts to saying that between [math]\displaystyle{ t_1 }[/math] and [math]\displaystyle{ t_2 }[/math] the control [math]\displaystyle{ u }[/math] is determined by the requirement that the singularity condition continues to hold. The resulting so-called singular arc, if it is optimal, will satisfy the Kelley condition:[1]
- [math]\displaystyle{ (-1)^k \frac{\partial}{\partial u} \left[ {\left( \frac{d}{dt} \right)}^{2k} H_u \right] \ge 0 ,\, k=0,1,\cdots }[/math]
Others refer to this condition as the generalized Legendre–Clebsch condition.
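The differentiation procedure and the Kelley test can be traced on a small worked example. The problem below (minimize [math]\displaystyle{ \int x^2\,dt }[/math] subject to [math]\displaystyle{ \dot x = u }[/math], [math]\displaystyle{ |u|\le 1 }[/math]) is an assumption chosen for illustration, not taken from the article; the derivative chain is worked out by hand in the comments and the code merely encodes the result:

```python
# Worked example (illustrative assumption, not from the article):
#   minimize  ∫ x(t)^2 dt   subject to  x' = u,  |u| <= 1.
# Hamiltonian:  H = x^2 + lam*u,  so  H_u = phi = lam  (linear in u).
# Costate equation:  lam' = -dH/dx = -2x.
# Differentiate H_u along trajectories until u appears explicitly:
#   (d/dt)   H_u = lam' = -2x     (u has not yet appeared)
#   (d/dt)^2 H_u = -2 x' = -2u    (u appears after 2k = 2 derivatives, k = 1)
# Setting -2u = 0 gives the singular control u = 0, on the arc x = lam = 0.

def d_du_of_d2dt2_Hu(u=0.0):
    # partial derivative w.r.t. u of (d/dt)^2 H_u = -2u  is the constant -2
    return -2.0

def kelley_lhs(k=1, u=0.0):
    # Left-hand side of the Kelley condition:
    #   (-1)^k * d/du [ (d/dt)^{2k} H_u ]
    return (-1.0) ** k * d_du_of_d2dt2_Hu(u)
```

Here `kelley_lhs()` evaluates to 2, which is nonnegative, so the singular arc [math]\displaystyle{ x=0 }[/math], [math]\displaystyle{ u=0 }[/math] passes the Kelley (generalized Legendre–Clebsch) test for [math]\displaystyle{ k=1 }[/math].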
The term bang-singular control refers to a control that has a bang-bang portion as well as a singular portion.
References
- ↑ Zelikin, M. I.; Borisov, V. F. (2005). "Singular Optimal Regimes in Problems of Mathematical Economics". Journal of Mathematical Sciences 130 (1): 4409–4570 [Theorem 11.1]. doi:10.1007/s10958-005-0350-5.
External links
- Bryson, Arthur E. Jr.; Ho, Yu-Chi (1969). "Singular Solutions of Optimization and Control Problems". Applied Optimal Control. Waltham: Blaisdell. pp. 246–270. ISBN 9780891162285. https://books.google.com/books?id=P4TKxn7qW5kC&pg=PA246.
Original source: https://en.wikipedia.org/wiki/Singular control.