Powell's dog leg method

Powell's dog leg method, also called Powell's hybrid method, is an iterative optimisation algorithm for the solution of non-linear least squares problems, introduced in 1970 by Michael J. D. Powell.[1] Similarly to the Levenberg–Marquardt algorithm, it combines the Gauss–Newton algorithm with gradient descent, but it uses an explicit trust region. At each iteration, if the step from the Gauss–Newton algorithm is within the trust region, it is used to update the current solution. If not, the algorithm searches for the minimum of the objective function along the steepest descent direction, known as the Cauchy point. If the Cauchy point is outside of the trust region, the step is truncated to the trust region boundary and taken as the new solution. If the Cauchy point is inside the trust region, the new solution is taken at the intersection between the trust region boundary and the line joining the Cauchy point and the Gauss–Newton step (dog leg step).[2] The name of the method derives from the resemblance between the construction of the dog leg step and the shape of a dogleg hole in golf.[2]

Formulation

Construction of the dog leg step

Given a least squares problem in the form

[math]\displaystyle{ F(\boldsymbol{x}) = \frac{1}{2} \left\| \boldsymbol{f} (\boldsymbol{x}) \right\|^2 = \frac{1}{2} \sum_{i=1}^m \left( f_i(\boldsymbol{x}) \right)^2 }[/math]

with [math]\displaystyle{ f_i: \mathbb{R}^n \to \mathbb{R} }[/math], Powell's dog leg method finds the optimal point [math]\displaystyle{ \boldsymbol{x}^* = \operatorname{argmin}_{\boldsymbol{x}} F(\boldsymbol{x}) }[/math] by constructing a sequence [math]\displaystyle{ \boldsymbol{x}_k = \boldsymbol{x}_{k-1} + \delta_k }[/math] that converges to [math]\displaystyle{ \boldsymbol{x}^* }[/math]. At a given iteration, the Gauss–Newton step is given by

[math]\displaystyle{ \boldsymbol{\delta_{gn}} = - \left( \boldsymbol{J}^\top \boldsymbol{J} \right)^{-1} \boldsymbol{J}^\top \boldsymbol{f}(\boldsymbol{x}) }[/math]

where [math]\displaystyle{ \boldsymbol{J} = \left( \frac{\partial{f_i}}{\partial{x_j}} \right) }[/math] is the Jacobian matrix, while the steepest descent direction is given by

[math]\displaystyle{ \boldsymbol{\delta_{sd}} = - \boldsymbol{J}^\top \boldsymbol{f}(\boldsymbol{x}) . }[/math]
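
Both directions can be evaluated directly from the Jacobian and the residual vector. The following is a minimal NumPy sketch (the function names are illustrative, and the normal equations are solved rather than forming the inverse of J^T J explicitly):

```python
import numpy as np

def gauss_newton_step(J, f):
    # delta_gn = -(J^T J)^{-1} J^T f, obtained by solving the normal
    # equations rather than inverting J^T J explicitly.
    return np.linalg.solve(J.T @ J, -J.T @ f)

def steepest_descent_direction(J, f):
    # delta_sd = -J^T f, the negative gradient of F(x) = (1/2) ||f(x)||^2.
    return -J.T @ f
```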

The objective function is linearised along the steepest descent direction

[math]\displaystyle{ \begin{align} F(\boldsymbol{x} + t \boldsymbol{\delta_{sd}}) &\approx \frac{1}{2} \left\| \boldsymbol{f}(\boldsymbol{x}) + t \boldsymbol{J}(\boldsymbol{x}) \boldsymbol{\delta_{sd}} \right\|^2 \\ &= F(\boldsymbol{x}) + t \boldsymbol{\delta_{sd}}^\top \boldsymbol{J}^\top \boldsymbol{f}(\boldsymbol{x}) + \frac{1}{2} t^2 \left\| \boldsymbol{J} \boldsymbol{\delta_{sd}} \right\|^2 . \end{align} }[/math]

To compute the value of the parameter [math]\displaystyle{ t }[/math] at the Cauchy point, the derivative of the last expression with respect to [math]\displaystyle{ t }[/math] is set equal to zero, giving

[math]\displaystyle{ t = -\frac{\boldsymbol{\delta_{sd}}^\top \boldsymbol{J}^\top \boldsymbol{f}(\boldsymbol{x})}{\left\| \boldsymbol{J} \boldsymbol{\delta_{sd}} \right\|^2} = \frac{\left\| \boldsymbol{\delta_{sd}} \right\|^2}{\left\| \boldsymbol{J} \boldsymbol{\delta_{sd}} \right\|^2}. }[/math]
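
A sketch of this step length computation, assuming J and delta_sd are NumPy arrays as in the previous sketch:

```python
import numpy as np

def cauchy_step_length(J, delta_sd):
    # t = ||delta_sd||^2 / ||J delta_sd||^2, the minimiser of the linearised
    # objective along the steepest descent direction.
    Jd = J @ delta_sd
    return (delta_sd @ delta_sd) / (Jd @ Jd)
```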


Given a trust region of radius [math]\displaystyle{ \Delta }[/math], Powell's dog leg method selects the update step [math]\displaystyle{ \boldsymbol{\delta_k} }[/math] as equal to (a code sketch of this selection logic is given after the list):

  • [math]\displaystyle{ \boldsymbol{\delta_{gn}} }[/math], if the Gauss–Newton step is within the trust region ([math]\displaystyle{ \left\| \boldsymbol{\delta_{gn}} \right\| \le \Delta }[/math]);
  • [math]\displaystyle{ \frac{\Delta}{\left\| \boldsymbol{\delta_{sd}} \right\|} \boldsymbol{\delta_{sd}} }[/math], if both the Gauss–Newton step and the Cauchy step are outside the trust region ([math]\displaystyle{ t \left\| \boldsymbol{\delta_{sd}} \right\| \ge \Delta }[/math]);
  • [math]\displaystyle{ t \boldsymbol{\delta_{sd}} + s \left( \boldsymbol{\delta_{gn}} - t \boldsymbol{\delta_{sd}} \right) }[/math] with [math]\displaystyle{ s }[/math] such that [math]\displaystyle{ \left\| \boldsymbol{\delta_k} \right\| = \Delta }[/math], if the Gauss–Newton step is outside the trust region but the Cauchy step is inside it (dog leg step).[1]
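
Putting the three cases together, the following is a minimal sketch of the step selection together with a simple iteration driving it. The residual function in the demonstration is purely illustrative (it is not from the article), the names are illustrative, and no safeguards are included for a singular J^T J; practical implementations also adapt the trust region radius between iterations, which is omitted here.

```python
import numpy as np

def dogleg_step(J, f, Delta):
    """Select the Powell dog leg step for trust region radius Delta."""
    delta_sd = -J.T @ f                                   # steepest descent direction
    t = (delta_sd @ delta_sd) / (np.linalg.norm(J @ delta_sd) ** 2)
    delta_gn = np.linalg.solve(J.T @ J, -J.T @ f)         # Gauss-Newton step

    if np.linalg.norm(delta_gn) <= Delta:
        return delta_gn                                   # case 1: Gauss-Newton step inside the region
    if t * np.linalg.norm(delta_sd) >= Delta:
        return (Delta / np.linalg.norm(delta_sd)) * delta_sd   # case 2: truncated steepest descent step
    # Case 3 (dog leg): with a = t*delta_sd (Cauchy step) and b = delta_gn - a,
    # find s >= 0 such that ||a + s*b|| = Delta (positive root of a quadratic in s).
    a = t * delta_sd
    b = delta_gn - a
    c = a @ b
    s = (-c + np.sqrt(c * c + (b @ b) * (Delta ** 2 - a @ a))) / (b @ b)
    return a + s * b


# Illustrative residuals with a root at (3, 2); not taken from the article.
def residuals(x):
    return np.array([x[0] ** 2 + x[1] - 11.0,
                     x[0] + x[1] ** 2 - 7.0])

def jacobian(x):
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

x = np.array([0.0, 0.0])
for _ in range(20):
    f = residuals(x)
    if np.linalg.norm(f) < 1e-12:        # residuals (essentially) zero: converged
        break
    x = x + dogleg_step(jacobian(x), f, Delta=1.0)   # fixed radius, for simplicity
print(x)  # converges to approximately (3, 2)
```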

References

  1. Powell (1970)
  2. Yuan (2000)

Sources

  • Lourakis, M.L.A.; Argyros, A.A. (2005). "Is Levenberg-Marquardt the most efficient optimization algorithm for implementing bundle adjustment?". Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1. pp. 1526–1531. doi:10.1109/ICCV.2005.128. ISBN 0-7695-2334-X. 
  • Yuan, Ya-xiang (2000). "A review of trust region algorithms for optimization". 99. 
  • Powell, M.J.D. (1970). "A new algorithm for unconstrained optimization". in Rosen, J.B.; Mangasarian, O.L.; Ritter, K.. Nonlinear Programming. New York: Academic Press. pp. 31–66. 
  • Powell, M.J.D. (1970). "A hybrid method for nonlinear equations". in Robinowitz, P.. Numerical Methods for Nonlinear Algebraic Equations. London: Gordon and Breach Science. pp. 87–144. 
