Control-Lyapunov function

From HandWiki

In control theory, a control-Lyapunov function (CLF)[1][2] is an extension of the idea of a Lyapunov function to systems with control inputs: a positive definite function [math]\displaystyle{ V(x) }[/math] whose time derivative can be made negative at every nonzero state by an appropriate choice of the control input. Consider the general nonlinear control system

[math]\displaystyle{ \dot{x} = f(x,u) \qquad (1) }[/math]

and its control-affine special case

[math]\displaystyle{ \dot{x} = f(x) + g(x)\,u \qquad (2) }[/math]

Artstein proved that the dynamical system (2) has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback u(x). It was later shown by Francis H. Clarke that every asymptotically controllable system can be stabilized by a (generally discontinuous) feedback.[3]

Constructing the Stabilizing Input

It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control-affine system (2), Sontag's formula (or Sontag's universal formula) gives the feedback law [math]\displaystyle{ k : \mathbb{R}^n \to \mathbb{R}^m }[/math] directly in terms of the derivatives of the CLF.[4]:Eq. 5.56 In the special case of a single input system [math]\displaystyle{ (m=1) }[/math], Sontag's formula is written as

[math]\displaystyle{ k(x) = \begin{cases} \displaystyle -\frac{L_{f} V(x)+\sqrt{\left[L_{f} V(x)\right]^{2}+\left[L_{g} V(x)\right]^{4}}}{L_{g} V(x)} & \text { if } L_{g} V(x) \neq 0 \\ 0 & \text { if } L_{g} V(x)=0 \end{cases} }[/math]

where [math]\displaystyle{ L_f V(x) := \langle \nabla V(x), f(x)\rangle }[/math] and [math]\displaystyle{ L_g V(x) := \langle \nabla V(x), g(x)\rangle }[/math] are the Lie derivatives of [math]\displaystyle{ V }[/math] along [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g }[/math], respectively.
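As a small, self-contained sketch of Sontag's formula, consider the scalar system [math]\displaystyle{ \dot{x} = x^3 + u }[/math] with the CLF [math]\displaystyle{ V(x) = x^2/2 }[/math] (both hypothetical choices for illustration):

```python
import math

def sontag_feedback(LfV, LgV):
    """Sontag's universal formula for a single-input control-affine system."""
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV

# Hypothetical scalar example: x' = x^3 + u with CLF V(x) = x^2/2,
# so LfV = x * x^3 and LgV = x.
def closed_loop_Vdot(x):
    LfV, LgV = x * x**3, x
    u = sontag_feedback(LfV, LgV)
    return LfV + LgV * u   # closed-loop V-dot
```

Substituting the formula back into the derivative gives [math]\displaystyle{ \dot{V} = L_f V + L_g V\, k(x) = -\sqrt{[L_f V(x)]^2 + [L_g V(x)]^4} }[/math], which is negative away from the origin, so the feedback renders the CLF strictly decreasing.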

For the general nonlinear system (1), the input [math]\displaystyle{ u }[/math] can be found by solving a static non-linear programming problem

[math]\displaystyle{ u^*(x) = \underset{u}{\operatorname{arg\,min}} \nabla V(x) \cdot f(x,u) }[/math]

for each state x.
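As a minimal sketch of this pointwise minimization (the scalar system, CLF, and finite input grid below are hypothetical choices for illustration; in practice [math]\displaystyle{ u }[/math] ranges over the actual admissible input set):

```python
# Sketch of the pointwise minimization u*(x) = argmin_u grad V(x) . f(x, u).
# The system x' = -x + x^2 * u and the CLF V(x) = x^2 / 2 are hypothetical
# illustrative choices, and the input set is approximated by a finite grid.

def f(x, u):
    return -x + x**2 * u

def vdot(x, u):
    # grad V(x) . f(x, u) with V(x) = x^2 / 2, so grad V(x) = x
    return x * f(x, u)

def argmin_input(x, u_candidates):
    """Grid-search approximation of the static nonlinear program."""
    return min(u_candidates, key=lambda u: vdot(x, u))

u_grid = [i / 10.0 for i in range(-10, 11)]   # candidate inputs in [-1, 1]
```

Because [math]\displaystyle{ \dot{V} }[/math] is affine in [math]\displaystyle{ u }[/math] for many systems, the minimizer often sits on the boundary of the input set, which is why a compact admissible set is assumed here.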

Example

Here is a characteristic example of applying a Lyapunov candidate function to a control problem.

Consider the non-linear mass-spring-damper system, with a hardening spring and position-dependent mass, described by

[math]\displaystyle{ m(1+q^2)\ddot{q}+b\dot{q}+K_0q+K_1q^3=u }[/math]

Now given the desired state, [math]\displaystyle{ q_d }[/math], and actual state, [math]\displaystyle{ q }[/math], with error, [math]\displaystyle{ e = q_d - q }[/math], define a function [math]\displaystyle{ r }[/math] as

[math]\displaystyle{ r=\dot{e}+\alpha e }[/math]

A Control-Lyapunov candidate is then

[math]\displaystyle{ r \mapsto V(r) :=\frac{1}{2}r^2 }[/math]

which is positive for all [math]\displaystyle{ r \ne 0 }[/math].

Now taking the time derivative of [math]\displaystyle{ V }[/math]

[math]\displaystyle{ \dot{V}=r\dot{r} }[/math]
[math]\displaystyle{ \dot{V}=(\dot{e}+\alpha e)(\ddot{e}+\alpha \dot{e}) }[/math]

The goal is to get the time derivative to be

[math]\displaystyle{ \dot{V}=-\kappa V }[/math]

which guarantees global exponential convergence of [math]\displaystyle{ r }[/math] to zero, since [math]\displaystyle{ V }[/math] is globally positive definite (which it is).

Hence we want the rightmost factor of [math]\displaystyle{ \dot{V} }[/math],

[math]\displaystyle{ (\ddot{e}+\alpha \dot{e})=(\ddot{q}_d-\ddot{q}+\alpha \dot{e}) }[/math]

to fulfill the requirement

[math]\displaystyle{ (\ddot{q}_d-\ddot{q}+\alpha \dot{e}) = -\frac{\kappa}{2}(\dot{e}+\alpha e) }[/math]

which upon substitution of the dynamics, [math]\displaystyle{ \ddot{q} }[/math], gives

[math]\displaystyle{ \left(\ddot{q}_d-\frac{u-K_0q-K_1q^3-b\dot{q}}{m(1+q^2)}+\alpha \dot{e}\right) = -\frac{\kappa}{2}(\dot{e}+\alpha e) }[/math]

Solving for [math]\displaystyle{ u }[/math] yields the control law

[math]\displaystyle{ u= m(1+q^2)\left(\ddot{q}_d + \alpha \dot{e}+\frac{\kappa}{2}r\right)+K_0q+K_1q^3+b\dot{q} }[/math]

with [math]\displaystyle{ \kappa }[/math] and [math]\displaystyle{ \alpha }[/math], both greater than zero, as tunable parameters.

This control law guarantees global exponential stability, since substituting it into the time derivative yields, as expected,

[math]\displaystyle{ \dot{V}=-\kappa V }[/math]

which is a linear first-order differential equation with solution

[math]\displaystyle{ V=V(0)\exp(-\kappa t) }[/math]

Hence the error and error rate, recalling that [math]\displaystyle{ V=\frac{1}{2}(\dot{e}+\alpha e)^2 }[/math], decay exponentially to zero.
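As a quick numerical sanity check, the closed-loop identity [math]\displaystyle{ \dot{V}=-\kappa V }[/math] can be verified at randomly sampled states; the plant parameters and gains below are illustrative assumptions:

```python
import random

m, b, K0, K1 = 1.3, 0.4, 2.0, 0.7    # illustrative plant parameters (assumed)
alpha, kappa = 1.5, 3.0              # tunable gains, both > 0

def vdot_closed_loop(q, qdot, qd, qd_dot, qd_ddot):
    """Return (V-dot, V) for the closed loop under the derived control law."""
    e, edot = qd - q, qd_dot - qdot
    r = edot + alpha * e
    # control law derived above
    u = (m * (1 + q**2) * (qd_ddot + alpha * edot + 0.5 * kappa * r)
         + K0 * q + K1 * q**3 + b * qdot)
    # plant dynamics give q-ddot, hence r-dot = e-ddot + alpha * e-dot
    qddot = (u - b * qdot - K0 * q - K1 * q**3) / (m * (1 + q**2))
    rdot = (qd_ddot - qddot) + alpha * edot
    return r * rdot, 0.5 * r**2

random.seed(0)
residual = 0.0
for _ in range(100):
    q, qdot, qd, qd_dot, qd_ddot = (random.uniform(-2, 2) for _ in range(5))
    Vdot, V = vdot_closed_loop(q, qdot, qd, qd_dot, qd_ddot)
    residual = max(residual, abs(Vdot + kappa * V))
print(residual)   # zero up to floating-point rounding
```

Since the control cancels the plant terms exactly, [math]\displaystyle{ \dot{r} = -\frac{\kappa}{2} r }[/math] holds identically and the residual is pure floating-point rounding.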

To tune a particular response, it is necessary to substitute back into the solution derived for [math]\displaystyle{ V }[/math] and solve for [math]\displaystyle{ e }[/math]. This is left as an exercise for the reader, but the first few steps of the solution are:

[math]\displaystyle{ r\dot{r}=-\frac{\kappa}{2}r^2 }[/math]
[math]\displaystyle{ \dot{r}=-\frac{\kappa}{2}r }[/math]
[math]\displaystyle{ r=r(0)\exp\left(-\frac{\kappa}{2} t\right) }[/math]
[math]\displaystyle{ \dot{e}+\alpha e= (\dot{e}(0)+\alpha e(0))\exp\left(-\frac{\kappa}{2} t\right) }[/math]

which can then be solved using any linear differential equation methods.
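Carrying the last step further (a sketch, assuming [math]\displaystyle{ \alpha \neq \kappa/2 }[/math] so that the two exponential modes are distinct), solving this linear first-order equation by an integrating factor gives

[math]\displaystyle{ e(t) = \left(e(0) - \frac{\dot{e}(0)+\alpha e(0)}{\alpha - \kappa/2}\right)\exp(-\alpha t) + \frac{\dot{e}(0)+\alpha e(0)}{\alpha - \kappa/2}\exp\left(-\frac{\kappa}{2} t\right) }[/math]

so the error decays at the slower of the two rates [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ \kappa/2 }[/math].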

References

  1. Isidori, A. (1995). Nonlinear Control Systems. Springer. ISBN 978-3-540-19916-8. 
  2. Freeman, Randy A.; Petar V. Kokotović (2008). "Robust Control Lyapunov Functions". Robust Nonlinear Control Design (illustrated, reprint ed.). Birkhäuser. pp. 33–63. doi:10.1007/978-0-8176-4759-9_3. ISBN 978-0-8176-4758-2. https://link.springer.com/chapter/10.1007/978-0-8176-4759-9_3. Retrieved 2009-03-04. 
  3. Clarke, F.H.; Ledyaev, Y.S.; Sontag, E.D.; Subbotin, A.I. (1997). "Asymptotic controllability implies feedback stabilization". IEEE Trans. Autom. Control 42 (10): 1394–1407. doi:10.1109/9.633828. 
  4. Sontag, E.D. (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems (2nd ed.). Springer. ISBN 978-0-387-98489-6. 

