Control-Lyapunov function


In control theory, a control-Lyapunov function (CLF)[1][2] is an extension of the idea of a Lyapunov function to systems with control inputs; it is used to test whether a system can be rendered asymptotically stable by feedback. E. D. Sontag showed that the existence of a continuous CLF is equivalent to asymptotic stabilizability. It was later shown by Francis H. Clarke that every asymptotically controllable system can be stabilized by a (generally discontinuous) feedback.[3] Artstein proved that the dynamical system (2) has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback u(x).

Constructing the Stabilizing Input

It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control affine system (2), Sontag's formula (or Sontag's universal formula) gives the feedback law $k : \mathbb{R}^n \to \mathbb{R}^m$ directly in terms of the derivatives of the CLF.[4]:Eq. 5.56 In the special case of a single-input system ($m = 1$), Sontag's formula is written as

$$
k(x) = \begin{cases}
-\dfrac{L_f V(x) + \sqrt{\left[L_f V(x)\right]^2 + \left[L_g V(x)\right]^4}}{L_g V(x)} & \text{if } L_g V(x) \neq 0 \\[1ex]
0 & \text{if } L_g V(x) = 0
\end{cases}
$$

where $L_f V(x) := \langle \nabla V(x), f(x) \rangle$ and $L_g V(x) := \langle \nabla V(x), g(x) \rangle$ are the Lie derivatives of $V$ along $f$ and $g$, respectively.
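
As an illustration, here is a minimal sketch of Sontag's formula for a single-input, control-affine system. The example plant, the quadratic CLF candidate, and the helper names (`f`, `g`, `clf_gradient`, `sontag_feedback`) are assumptions made for demonstration and are not part of the original article.

```python
import numpy as np

# Illustrative single-input control-affine system  x_dot = f(x) + g(x) u
# (this plant and the CLF candidate below are assumptions, not from the article).
def f(x):
    return np.array([x[1], -x[0] + x[0] ** 3])

def g(x):
    return np.array([0.0, 1.0])

def clf_gradient(x):
    # Gradient of the candidate CLF V(x) = 1/2 (x1^2 + x2^2).
    return x

def sontag_feedback(x):
    """Sontag's universal formula for a single-input system (m = 1)."""
    LfV = clf_gradient(x) @ f(x)   # Lie derivative of V along f
    LgV = clf_gradient(x) @ g(x)   # Lie derivative of V along g
    if np.isclose(LgV, 0.0):
        return 0.0
    return -(LfV + np.sqrt(LfV ** 2 + LgV ** 4)) / LgV

u = sontag_feedback(np.array([1.0, -0.5]))
```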

For the general nonlinear system (1), the input u can be found by solving a static non-linear programming problem

$$u^*(x) = \underset{u}{\operatorname{arg\,min}} \; \nabla V(x) \cdot f(x, u)$$

for each state x.
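
A minimal numerical sketch of this pointwise minimization is shown below; the example plant, the input bounds, and the use of `scipy.optimize.minimize_scalar` are assumptions chosen for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative general nonlinear system  x_dot = f(x, u)  with a scalar input
# (the plant and the input bounds are assumptions for demonstration).
def f(x, u):
    return np.array([x[1], -np.sin(x[0]) - x[1] + u])

def clf_gradient(x):
    # Gradient of the candidate CLF V(x) = 1/2 ||x||^2.
    return x

def pointwise_optimal_input(x, u_min=-5.0, u_max=5.0):
    """Minimize grad V(x) . f(x, u) over admissible u for the given state x."""
    objective = lambda u: clf_gradient(x) @ f(x, u)
    result = minimize_scalar(objective, bounds=(u_min, u_max), method="bounded")
    return result.x

u_star = pointwise_optimal_input(np.array([0.3, -0.1]))
```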

Example

Here is a characteristic example of applying a Lyapunov candidate function to a control problem.

Consider the non-linear system given by a mass-spring-damper with spring hardening and a position-dependent mass, described by

$$m(1 + q^2)\ddot{q} + b\dot{q} + K_0 q + K_1 q^3 = u$$

Now, given the desired state $q_d$ and actual state $q$, with error $e = q_d - q$, define a function $r$ as

$$r = \dot{e} + \alpha e$$

A control-Lyapunov candidate is then

$$V(r) := \frac{1}{2} r^2$$

which is positive for all $r \neq 0$.

Now taking the time derivative of $V$,

$$\dot{V} = r \dot{r}$$
$$\dot{V} = (\dot{e} + \alpha e)(\ddot{e} + \alpha \dot{e})$$

The goal is to get the time derivative to be

$$\dot{V} = -\kappa V$$

which guarantees global exponential stability provided $V$ is globally positive definite (which it is).

Hence we want the rightmost bracket of $\dot{V}$,

$$(\ddot{e} + \alpha \dot{e}) = (\ddot{q}_d - \ddot{q} + \alpha \dot{e})$$

to fulfill the requirement

$$(\ddot{q}_d - \ddot{q} + \alpha \dot{e}) = -\frac{\kappa}{2}(\dot{e} + \alpha e)$$

which upon substitution of the dynamics, $\ddot{q}$, gives

$$\left(\ddot{q}_d - \frac{u - K_0 q - K_1 q^3 - b\dot{q}}{m(1 + q^2)} + \alpha \dot{e}\right) = -\frac{\kappa}{2}(\dot{e} + \alpha e)$$

Solving for u yields the control law

$$u = m(1 + q^2)\left(\ddot{q}_d + \alpha \dot{e} + \frac{\kappa}{2} r\right) + K_0 q + K_1 q^3 + b\dot{q}$$

with $\kappa$ and $\alpha$, both greater than zero, as tunable parameters.

This control law guarantees global exponential stability, since substituting it into the time derivative yields, as expected,

$$\dot{V} = -\kappa V$$

which is a linear first-order differential equation with solution

$$V = V(0)\exp(-\kappa t)$$

Hence the error and error rate, recalling that $V = \frac{1}{2}(\dot{e} + \alpha e)^2$, decay exponentially to zero.
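
As a sanity check, a minimal closed-loop simulation of this control law is sketched below; the parameter values, the constant set-point reference, and the use of `scipy.integrate.solve_ivp` are assumptions made for illustration and are not part of the original article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumptions, not from the article).
m, b, K0, K1 = 1.0, 0.5, 2.0, 1.0
alpha, kappa = 2.0, 4.0

# Desired trajectory q_d(t) and its derivatives; a constant set-point here.
def q_d(t):     return 1.0
def qd_dot(t):  return 0.0
def qd_ddot(t): return 0.0

def control(t, q, q_dot):
    """u = m(1+q^2)(q_d'' + alpha*e' + (kappa/2) r) + K0 q + K1 q^3 + b q'."""
    e = q_d(t) - q
    e_dot = qd_dot(t) - q_dot
    r = e_dot + alpha * e
    return (m * (1 + q ** 2) * (qd_ddot(t) + alpha * e_dot + 0.5 * kappa * r)
            + K0 * q + K1 * q ** 3 + b * q_dot)

def closed_loop(t, x):
    q, q_dot = x
    u = control(t, q, q_dot)
    # Plant: m(1+q^2) q'' + b q' + K0 q + K1 q^3 = u
    q_ddot = (u - b * q_dot - K0 * q - K1 * q ** 3) / (m * (1 + q ** 2))
    return [q_dot, q_ddot]

sol = solve_ivp(closed_loop, (0.0, 5.0), [0.0, 0.0], dense_output=True)
q_final = sol.y[0, -1]   # should approach the set-point q_d = 1
```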

If you wish to tune a particular response from this, it is necessary to substitute back into the solution derived for $V$ and solve for $e$. This is left as an exercise for the reader, but the first few steps of the solution are:

$$r\dot{r} = -\frac{\kappa}{2} r^2$$
$$\dot{r} = -\frac{\kappa}{2} r$$
$$r = r(0)\exp\!\left(-\frac{\kappa}{2} t\right)$$
$$\dot{e} + \alpha e = \left(\dot{e}(0) + \alpha e(0)\right)\exp\!\left(-\frac{\kappa}{2} t\right)$$

which can then be solved by standard methods for linear differential equations.
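
For completeness, a brief sketch of the remaining steps is given here (using an integrating factor and assuming $\alpha \neq \kappa/2$; this completion is not in the original text):

$$\frac{d}{dt}\!\left(e\, e^{\alpha t}\right) = \left(\dot{e}(0) + \alpha e(0)\right) e^{(\alpha - \kappa/2)t}$$

$$e(t) = e(0)\, e^{-\alpha t} + \frac{\dot{e}(0) + \alpha e(0)}{\alpha - \kappa/2}\left(e^{-(\kappa/2)t} - e^{-\alpha t}\right)$$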

References

  1. Isidori, A. (1995). Nonlinear Control Systems. Springer. ISBN 978-3-540-19916-8. 
  2. Freeman, Randy A.; Petar V. Kokotović (2008). "Robust Control Lyapunov Functions". Robust Nonlinear Control Design (illustrated, reprint ed.). Birkhäuser. pp. 33–63. doi:10.1007/978-0-8176-4759-9_3. ISBN 978-0-8176-4758-2. https://link.springer.com/chapter/10.1007/978-0-8176-4759-9_3. Retrieved 2009-03-04. 
  3. Clarke, F.H.; Ledyaev, Y.S.; Sontag, E.D.; Subbotin, A.I. (1997). "Asymptotic controllability implies feedback stabilization". IEEE Trans. Autom. Control 42 (10): 1394–1407. doi:10.1109/9.633828. 
  4. Sontag, E. D. (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems (2nd ed.). New York: Springer. 

