Differential dynamic programming

From HandWiki

Differential dynamic programming (DDP) is an optimal control algorithm of the trajectory optimization class. The algorithm was introduced in 1966 by Mayne[1] and subsequently analysed in Jacobson and Mayne's eponymous book.[2] The algorithm uses locally-quadratic models of the dynamics and cost functions, and displays quadratic convergence. It is closely related to Pantoja's step-wise Newton's method.[3][4]

Finite-horizon discrete-time problems

The dynamics

[math]\displaystyle{ \mathbf{x}_{i+1} = \mathbf{f}(\mathbf{x}_i,\mathbf{u}_i) }[/math]

(1)

describe the evolution of the state [math]\displaystyle{ \textstyle\mathbf{x} }[/math] given the control [math]\displaystyle{ \mathbf{u} }[/math] from time [math]\displaystyle{ i }[/math] to time [math]\displaystyle{ i+1 }[/math]. The total cost [math]\displaystyle{ J_0 }[/math] is the sum of running costs [math]\displaystyle{ \textstyle\ell }[/math] and final cost [math]\displaystyle{ \ell_f }[/math], incurred when starting from state [math]\displaystyle{ \mathbf{x} }[/math] and applying the control sequence [math]\displaystyle{ \mathbf{U} \equiv \{\mathbf{u}_0,\mathbf{u}_1,\dots,\mathbf{u}_{N-1}\} }[/math] until the horizon is reached:

[math]\displaystyle{ J_0(\mathbf{x},\mathbf{U})=\sum_{i=0}^{N-1}\ell(\mathbf{x}_i,\mathbf{u}_i) + \ell_f(\mathbf{x}_N), }[/math]

where [math]\displaystyle{ \mathbf{x}_0\equiv\mathbf{x} }[/math], and the [math]\displaystyle{ \mathbf{x}_i }[/math] for [math]\displaystyle{ i\gt 0 }[/math] are given by Eq. 1. The solution of the optimal control problem is the minimizing control sequence [math]\displaystyle{ \mathbf{U}^*(\mathbf{x})\equiv \operatorname{argmin}_{\mathbf{U}} J_0(\mathbf{x},\mathbf{U}). }[/math] Trajectory optimization means finding [math]\displaystyle{ \mathbf{U}^*(\mathbf{x}) }[/math] for a particular [math]\displaystyle{ \mathbf{x}_0 }[/math], rather than for all possible initial states.
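For concreteness, this rollout can be written in a few lines of code. The sketch below uses a hypothetical scalar problem (dynamics [math]\displaystyle{ f(x,u)=x+u }[/math] with quadratic costs); these particular choices are illustrative assumptions, not from the article:

```python
# Illustrative toy problem: f(x, u) = x + u, l(x, u) = x^2 + u^2, l_f(x) = x^2.

def f(x, u):
    """Discrete-time dynamics, Eq. 1."""
    return x + u

def running_cost(x, u):
    return x**2 + u**2

def final_cost(x):
    return x**2

def total_cost(x0, U):
    """J_0(x, U): roll the control sequence forward and accumulate costs."""
    x, J = x0, 0.0
    for u in U:
        J += running_cost(x, u)
        x = f(x, u)
    return J + final_cost(x)

J = total_cost(1.0, [-0.5, -0.25])  # J_0 for a horizon of N = 2
```

Trajectory optimization then amounts to searching over the control sequence `U` to minimize this rollout cost for the given `x0`.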

Dynamic programming

Let [math]\displaystyle{ \mathbf{U}_i }[/math] be the partial control sequence [math]\displaystyle{ \mathbf{U}_i \equiv \{\mathbf{u}_i,\mathbf{u}_{i+1},\dots,\mathbf{u}_{N-1}\} }[/math] and define the cost-to-go [math]\displaystyle{ J_i }[/math] as the partial sum of costs from [math]\displaystyle{ i }[/math] to [math]\displaystyle{ N }[/math]:

[math]\displaystyle{ J_i(\mathbf{x},\mathbf{U}_i)=\sum_{j=i}^{N-1}\ell(\mathbf{x}_j,\mathbf{u}_j) + \ell_f(\mathbf{x}_N). }[/math]

The optimal cost-to-go or value function at time [math]\displaystyle{ i }[/math] is the cost-to-go given the minimizing control sequence:

[math]\displaystyle{ V(\mathbf{x},i)\equiv \min_{\mathbf{U}_i}J_i(\mathbf{x},\mathbf{U}_i). }[/math]

Setting [math]\displaystyle{ V(\mathbf{x},N)\equiv \ell_f(\mathbf{x}) }[/math], the dynamic programming principle reduces the minimization over an entire sequence of controls to a sequence of minimizations over a single control, proceeding backwards in time:

[math]\displaystyle{ V(\mathbf{x},i)= \min_{\mathbf{u}}[\ell(\mathbf{x},\mathbf{u}) + V(\mathbf{f}(\mathbf{x},\mathbf{u}),i+1)]. }[/math]

(2)

This is the Bellman equation.
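On a problem small enough to discretize, the Bellman recursion of Eq. 2 can be carried out exactly. The grid, dynamics, and costs below are illustrative assumptions (a clamped integrator with quadratic costs), chosen only so that the grid is closed under the dynamics:

```python
def f(x, u):
    return max(-1, min(1, x + u))  # clamp so the state grid is closed under f

def l(x, u):
    return x**2 + u**2

states = [-1, 0, 1]
controls = [-1, 0, 1]

def bellman_backup(V_next):
    """V(x, i) = min_u [ l(x, u) + V(f(x, u), i+1) ] for every grid state."""
    return {x: min(l(x, u) + V_next[f(x, u)] for u in controls)
            for x in states}

V_terminal = {x: x**2 for x in states}  # V(x, N) = l_f(x)
V = bellman_backup(V_terminal)          # V(., N-1)
```

Iterating `bellman_backup` N times yields [math]\displaystyle{ V(\cdot,0) }[/math]; the cost of refining such a grid in higher dimensions is what motivates the local quadratic models used by DDP.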

Differential dynamic programming

DDP proceeds by iteratively performing a backward pass on the nominal trajectory to generate a new control sequence, and then a forward pass to compute and evaluate a new nominal trajectory. We begin with the backward pass. If

[math]\displaystyle{ \ell(\mathbf{x},\mathbf{u}) + V(\mathbf{f}(\mathbf{x},\mathbf{u}),i+1) }[/math]

is the argument of the [math]\displaystyle{ \min[\cdot] }[/math] operator in Eq. 2, let [math]\displaystyle{ Q }[/math] be the variation of this quantity around the [math]\displaystyle{ i }[/math]-th [math]\displaystyle{ (\mathbf{x},\mathbf{u}) }[/math] pair:

[math]\displaystyle{ \begin{align}Q(\delta\mathbf{x},\delta\mathbf{u})\equiv &\ell(\mathbf{x}+\delta\mathbf{x},\mathbf{u}+\delta\mathbf{u})&&{}+V(\mathbf{f}(\mathbf{x}+\delta\mathbf{x},\mathbf{u}+\delta\mathbf{u}),i+1) \\ -&\ell(\mathbf{x},\mathbf{u})&&{}-V(\mathbf{f}(\mathbf{x},\mathbf{u}),i+1) \end{align} }[/math]

and expand to second order

[math]\displaystyle{ \approx\frac{1}{2} \begin{bmatrix} 1\\ \delta\mathbf{x}\\ \delta\mathbf{u} \end{bmatrix}^\mathsf{T} \begin{bmatrix} 0 & Q_\mathbf{x}^\mathsf{T} & Q_\mathbf{u}^\mathsf{T}\\ Q_\mathbf{x} & Q_{\mathbf{x}\mathbf{x}} & Q_{\mathbf{x}\mathbf{u}}\\ Q_\mathbf{u} & Q_{\mathbf{u}\mathbf{x}} & Q_{\mathbf{u}\mathbf{u}} \end{bmatrix} \begin{bmatrix} 1\\ \delta\mathbf{x}\\ \delta\mathbf{u} \end{bmatrix} }[/math]

(3)

The [math]\displaystyle{ Q }[/math] notation used here is a variant of the notation of Morimoto, where subscripts denote differentiation in denominator layout.[5] Dropping the index [math]\displaystyle{ i }[/math] for readability and writing primes for the next time-step, [math]\displaystyle{ V'\equiv V(i+1) }[/math], the expansion coefficients are

[math]\displaystyle{ \begin{alignat}{2} Q_\mathbf{x} &= \ell_\mathbf{x}+ \mathbf{f}_\mathbf{x}^\mathsf{T} V'_\mathbf{x} \\ Q_\mathbf{u} &= \ell_\mathbf{u}+ \mathbf{f}_\mathbf{u}^\mathsf{T} V'_\mathbf{x} \\ Q_{\mathbf{x}\mathbf{x}} &= \ell_{\mathbf{x}\mathbf{x}} + \mathbf{f}_\mathbf{x}^\mathsf{T} V'_{\mathbf{x}\mathbf{x}}\mathbf{f}_\mathbf{x}+V_\mathbf{x}'\cdot\mathbf{f}_{\mathbf{x}\mathbf{x}}\\ Q_{\mathbf{u}\mathbf{u}} &= \ell_{\mathbf{u}\mathbf{u}} + \mathbf{f}_\mathbf{u}^\mathsf{T} V'_{\mathbf{x}\mathbf{x}}\mathbf{f}_\mathbf{u}+{V'_\mathbf{x}} \cdot\mathbf{f}_{\mathbf{u} \mathbf{u}}\\ Q_{\mathbf{u}\mathbf{x}} &= \ell_{\mathbf{u}\mathbf{x}} + \mathbf{f}_\mathbf{u}^\mathsf{T} V'_{\mathbf{x}\mathbf{x}}\mathbf{f}_\mathbf{x} + {V'_\mathbf{x}} \cdot \mathbf{f}_{\mathbf{u} \mathbf{x}}. \end{alignat} }[/math]

The last terms in the last three equations denote contraction of a vector with a tensor. Minimizing the quadratic approximation (3) with respect to [math]\displaystyle{ \delta\mathbf{u} }[/math] we have

[math]\displaystyle{ {\delta \mathbf{u}}^* = \operatorname{argmin}\limits_{\delta \mathbf{u}}Q(\delta \mathbf{x},\delta \mathbf{u})=-Q_{\mathbf{u}\mathbf{u}}^{-1}(Q_\mathbf{u}+Q_{\mathbf{u}\mathbf{x}}\delta \mathbf{x}), }[/math]

(4)

giving an open-loop term [math]\displaystyle{ \mathbf{k}=-Q_{\mathbf{u}\mathbf{u}}^{-1}Q_\mathbf{u} }[/math] and a feedback gain term [math]\displaystyle{ \mathbf{K}=-Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}\mathbf{x}} }[/math]. Plugging the result back into (3), we now have a quadratic model of the value at time [math]\displaystyle{ i }[/math]:

[math]\displaystyle{ \begin{alignat}{2} \Delta V(i) &= &{} -\tfrac{1}{2}Q_\mathbf{u}^\mathsf{T} Q_{\mathbf{u}\mathbf{u}}^{-1}Q_\mathbf{u}\\ V_\mathbf{x}(i) &= Q_\mathbf{x} & {}- Q_{\mathbf{x}\mathbf{u}} Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}}\\ V_{\mathbf{x}\mathbf{x}}(i) &= Q_{\mathbf{x}\mathbf{x}} &{} - Q_{\mathbf{x}\mathbf{u}}Q_{\mathbf{u}\mathbf{u}}^{-1}Q_{\mathbf{u}\mathbf{x}}. \end{alignat} }[/math]
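One step of the backward pass can be sketched in code, combining the expansion coefficients with the gains of Eq. 4 and the value update. This sketch makes two simplifying assumptions: the second-order dynamics terms ([math]\displaystyle{ V'_\mathbf{x}\cdot\mathbf{f}_{\mathbf{x}\mathbf{x}} }[/math] etc.) are dropped, which is the common iLQR simplification, and [math]\displaystyle{ Q_{\mathbf{u}\mathbf{u}} }[/math] is assumed positive definite (see the regularization section below):

```python
import numpy as np

def backward_step(lx, lu, lxx, luu, lux, fx, fu, Vx, Vxx):
    """One step of the backward pass. Second-order dynamics terms are
    dropped (iLQR simplification); Quu is assumed positive definite."""
    Qx  = lx + fx.T @ Vx
    Qu  = lu + fu.T @ Vx
    Qxx = lxx + fx.T @ Vxx @ fx
    Quu = luu + fu.T @ Vxx @ fu
    Qux = lux + fu.T @ Vxx @ fx
    Quu_inv = np.linalg.inv(Quu)
    k = -Quu_inv @ Qu                   # open-loop modification, Eq. 4
    K = -Quu_inv @ Qux                  # feedback gain, Eq. 4
    dV      = -0.5 * Qu @ Quu_inv @ Qu  # expected cost reduction
    Vx_new  = Qx + K.T @ Qu             # = Qx - Qxu Quu^{-1} Qu
    Vxx_new = Qxx + Qux.T @ K           # = Qxx - Qxu Quu^{-1} Qux
    return k, K, dV, Vx_new, Vxx_new

# Scalar sanity check (illustrative numbers only):
k, K, dV, Vx1, Vxx1 = backward_step(
    lx=np.array([1.0]), lu=np.array([1.0]),
    lxx=np.array([[1.0]]), luu=np.array([[1.0]]), lux=np.array([[0.0]]),
    fx=np.array([[1.0]]), fu=np.array([[1.0]]),
    Vx=np.array([1.0]), Vxx=np.array([[1.0]]))
```

In practice one would factor `Quu` (e.g. by Cholesky) rather than invert it explicitly; the explicit inverse is kept here to mirror the equations.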

Recursively computing the local quadratic models of [math]\displaystyle{ V(i) }[/math] and the control modifications [math]\displaystyle{ \{\mathbf{k}(i),\mathbf{K}(i)\} }[/math], from [math]\displaystyle{ i=N-1 }[/math] down to [math]\displaystyle{ i=0 }[/math], constitutes the backward pass. As above, the value function is initialized with [math]\displaystyle{ V(\mathbf{x},N)\equiv \ell_f(\mathbf{x}) }[/math]. Once the backward pass is completed, a forward pass computes a new trajectory:

[math]\displaystyle{ \begin{align} \hat{\mathbf{x}}(0)&=\mathbf{x}(0)\\ \hat{\mathbf{u}}(i)&=\mathbf{u}(i) + \mathbf{k}(i) +\mathbf{K}(i)(\hat{\mathbf{x}}(i) - \mathbf{x}(i))\\ \hat{\mathbf{x}}(i+1)&=\mathbf{f}(\hat{\mathbf{x}}(i),\hat{\mathbf{u}}(i)) \end{align} }[/math]

The backward passes and forward passes are iterated until convergence.
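The forward pass can be sketched as follows; the array shapes and function signature are illustrative assumptions:

```python
import numpy as np

def forward_pass(x0, X, U, ks, Ks, f):
    """Roll out the updated controls along the new trajectory.
    X, U: nominal trajectory; ks, Ks: gains from the backward pass."""
    x_new = x0
    X_new, U_new = [x0], []
    for x, u, k, K in zip(X, U, ks, Ks):
        u_hat = u + k + K @ (x_new - x)  # u(i) + k(i) + K(i)(x_hat(i) - x(i))
        x_new = f(x_new, u_hat)
        U_new.append(u_hat)
        X_new.append(x_new)
    return X_new, U_new

# One-step example with hypothetical linear dynamics f(x, u) = x + u:
X_new, U_new = forward_pass(
    np.array([1.0]),       # initial state x0
    [np.array([1.0])],     # nominal states x(i)
    [np.array([-0.5])],    # nominal controls u(i)
    [np.array([-0.5])],    # open-loop terms k(i)
    [np.array([[0.0]])],   # feedback gains K(i)
    lambda x, u: x + u)
```

Note that the feedback term uses the deviation of the *new* state from the nominal one, so the gains computed around the old trajectory remain useful as the rollout drifts away from it.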

Regularization and line-search

Differential dynamic programming is a second-order algorithm like Newton's method. It therefore takes large steps toward the minimum and often requires regularization and/or line-search to achieve convergence.[6][7] Regularization in the DDP context means ensuring that the [math]\displaystyle{ Q_{\mathbf{u}\mathbf{u}} }[/math] matrix in Eq. 4 is positive definite. Line-search in DDP amounts to scaling the open-loop control modification [math]\displaystyle{ \mathbf{k} }[/math] by some [math]\displaystyle{ 0\lt \alpha\lt 1 }[/math].
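A common regularization heuristic can be sketched as follows; the Levenberg-Marquardt-style shift schedule and starting value of `mu` are illustrative assumptions, not prescriptions from the cited papers:

```python
import numpy as np

def regularize(Quu, mu=1e-6, factor=10.0):
    """Increase a diagonal shift mu until Quu + mu*I admits a Cholesky
    factorization, i.e. is positive definite. A common heuristic; the
    cited papers discuss more careful schemes."""
    n = Quu.shape[0]
    while True:
        try:
            np.linalg.cholesky(Quu + mu * np.eye(n))
            return Quu + mu * np.eye(n)
        except np.linalg.LinAlgError:
            mu *= factor

# A line-search then scales only the open-loop term:
#   u_hat = u + alpha * k + K (x_hat - x),  0 < alpha < 1,
# backtracking on alpha until the forward pass reduces the total cost.

Quu_reg = regularize(np.array([[-1.0]]))  # an indefinite 1x1 example
```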

Monte Carlo version

Sampled differential dynamic programming (SaDDP) is a Monte Carlo variant of differential dynamic programming.[8][9][10] It is based on treating the quadratic cost of differential dynamic programming as the energy of a Boltzmann distribution. In this way the quantities of DDP can be matched to the statistics of a multivariate normal distribution. The statistics can be recomputed from sampled trajectories without differentiation.

Sampled differential dynamic programming has been extended to Path Integral Policy Improvement with Differential Dynamic Programming.[11] This creates a link between differential dynamic programming and path integral control,[12] which is a framework of stochastic optimal control.

Constrained problems

Interior-point differential dynamic programming (IPDDP) is an interior-point method generalization of DDP that can address optimal control problems with nonlinear state and input constraints.[13]

References

  1. Mayne, D. Q. (1966). "A second-order gradient method of optimizing non-linear discrete time systems". International Journal of Control 3: 85–95. doi:10.1080/00207176608921369. 
  2. Jacobson, David H.; Mayne, David Q. (1970). Differential Dynamic Programming. New York: American Elsevier Pub. Co. ISBN 978-0-444-00070-5. https://books.google.com/books?id=tA-oAAAAIAAJ. 
  3. de O. Pantoja, J. F. A. (1988). "Differential dynamic programming and Newton's method". International Journal of Control 47 (5): 1539–1553. doi:10.1080/00207178808906114. ISSN 0020-7179. 
  4. Liao, L. Z. (1992). "Advantages of differential dynamic programming over Newton's method for discrete-time optimal control problems". https://hdl.handle.net/1813/5474. 
  5. Morimoto, J.; Zeglin, G.; Atkeson, C. G. (2003). "Minimax differential dynamic programming: Application to a biped walking robot". IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Vol. 2. pp. 1927–1932. 
  6. Liao, L. Z; C. A Shoemaker (1991). "Convergence in unconstrained discrete-time differential dynamic programming". IEEE Transactions on Automatic Control 36 (6): 692. doi:10.1109/9.86943. 
  7. Tassa, Y. (2011). Theory and implementation of bio-mimetic motor controllers (PDF) (Thesis). Hebrew University. Archived from the original (PDF) on 2016-03-04. Retrieved 2012-02-27.
  8. "Sampled differential dynamic programming". 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). doi:10.1109/IROS.2016.7759229. 
  9. Rajamäki, Joose; Hämäläinen, Perttu (June 2018). "Regularizing Sampled Differential Dynamic Programming". 2018 Annual American Control Conference (ACC). pp. 2182–2189. doi:10.23919/ACC.2018.8430799. https://ieeexplore.ieee.org/document/8430799. Retrieved 2018-10-19. 
  10. Rajamäki, Joose (2018). Random Search Algorithms for Optimal Control. Aalto University. ISBN 978-952-60-8156-4. http://urn.fi/URN:ISBN:978-952-60-8156-4. 
  11. Lefebvre, Tom; Crevecoeur, Guillaume (July 2019). "Path Integral Policy Improvement with Differential Dynamic Programming". 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). pp. 739–745. doi:10.1109/AIM.2019.8868359. ISBN 978-1-7281-2493-3. https://ieeexplore.ieee.org/document/8868359. 
  12. Theodorou, Evangelos; Buchli, Jonas; Schaal, Stefan (May 2010). "Reinforcement learning of motor skills in high dimensions: A path integral approach". 2010 IEEE International Conference on Robotics and Automation. pp. 2397–2403. doi:10.1109/ROBOT.2010.5509336. ISBN 978-1-4244-5038-1. https://ieeexplore.ieee.org/document/5509336. 
  13. Pavlov, Andrei; Shames, Iman; Manzie, Chris (2020). "Interior Point Differential Dynamic Programming". arXiv:2004.12710 [math.OC].
