In [[control theory]], a '''control-Lyapunov function''' <math>V(x)</math><ref>Freeman (46)</ref> is a generalization of the notion of a [[Lyapunov function]] <math>V(x)</math> used in [[Lyapunov stability|stability]] analysis.  The ordinary Lyapunov function is used to test whether a [[dynamical system]] is ''stable'' (more restrictively, ''asymptotically stable''), that is, whether the system, starting in a state <math>x \ne 0</math> in some domain ''D'', will remain in ''D'', or, for ''asymptotic stability'', will eventually return to <math>x = 0</math>. The control-Lyapunov function is used to test whether a system is ''feedback stabilizable'', that is, whether for any state ''x'' there exists a control <math> u(x,t)</math> such that the system can be brought to the zero state by applying the control ''u''.
 
More formally, suppose we are given a dynamical system
:<math>
\dot{x}(t)=f(x(t))+g(x(t))\, u(t),
</math>
where the state ''x''(''t'') and the control ''u''(''t'') are vectors.
 
'''Definition.'''  A control-Lyapunov function is a function <math>V(x)</math> that is continuous, positive-definite (that is, <math>V(x)</math> is positive except at <math>x=0</math>, where it is zero), proper (that is, <math>V(x)\to \infty</math> as <math>|x|\to \infty</math>), and such that
:<math>
\forall x \ne 0, \exists u \qquad \dot{V}(x,u) < 0,
</math>
where <math>\dot{V}(x,u) = \nabla V(x) \cdot \left( f(x) + g(x)\, u \right)</math> denotes the derivative of <math>V</math> along trajectories of the system.
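For example (a standard illustration, using the simplest possible system): for the scalar integrator <math>\dot{x}=u</math>, the function <math>V(x)=\tfrac{1}{2}x^2</math> is a control-Lyapunov function, since choosing <math>u=-x</math> gives <math>\dot{V}(x,u)=xu=-x^2<0</math> for all <math>x \ne 0</math>.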
 
The last condition in the definition is the key one: in words, it says that for each state ''x'' we can find a control ''u'' that will reduce the "energy" ''V''. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy to zero, that is, to bring the system to a stop. This is made rigorous by the following result:
 
'''Artstein's theorem.'''  The dynamical system has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback ''u''(''x'').
 
It may not be easy to find a control-Lyapunov function for a given system, but if one can be found, through some ingenuity and luck, then the feedback stabilization problem simplifies considerably; in fact, it reduces to solving the static non-linear [[optimization (mathematics)|programming problem]]
:<math>
u^*(x) = \arg\min_u \nabla V(x) \cdot \left( f(x) + g(x)\, u \right)
</math>
for each state ''x'', with the minimization taken over the admissible controls.
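For control-affine systems of this form, one explicit stabilizing feedback that can be built from a control-Lyapunov function is Sontag's universal formula. The following is a minimal Python sketch; the scalar test system <math>\dot{x}=x^3+u</math> and the candidate <math>V(x)=\tfrac{1}{2}x^2</math> are illustrative assumptions, not taken from the text.

<syntaxhighlight lang="python">
import numpy as np

def sontag_feedback(x, f, g, grad_V):
    """Sontag's universal formula for a single-input control-affine
    system x' = f(x) + g(x) u with control-Lyapunov function V."""
    a = float(grad_V(x) @ f(x))   # derivative of V along the drift f
    b = float(grad_V(x) @ g(x))   # derivative of V along the input direction g
    if abs(b) < 1e-12:            # no control authority; the CLF condition gives a < 0
        return 0.0
    return -(a + np.sqrt(a**2 + b**4)) / b

# Illustrative (assumed) scalar example: x' = x^3 + u with V(x) = x^2 / 2.
f = lambda x: np.array([x[0]**3])
g = lambda x: np.array([1.0])
grad_V = lambda x: np.array([x[0]])

x, dt = np.array([1.5]), 1e-3
for _ in range(5000):
    u = sontag_feedback(x, f, g, grad_V)
    x = x + dt * (f(x) + g(x) * u)   # forward-Euler step
print(x)   # the unstable drift x^3 is overcome and x is driven toward 0
</syntaxhighlight>

With this choice, <math>\dot{V} = -\sqrt{a^2+b^4} < 0</math> whenever <math>x \ne 0</math>, so the closed loop is asymptotically stable.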
   
The theory and application of control-Lyapunov functions were developed by Z. Artstein and [[Eduardo D. Sontag|E. D. Sontag]] in the 1980s and 1990s.
==Example==
Here is a characteristic example of applying a Lyapunov candidate function to a control problem.
 
Consider the non-linear system, a mass-spring-damper with spring hardening and position-dependent mass, described by
:<math>
m(1+q^2)\ddot{q}+b\dot{q}+K_0q+K_1q^3=u
</math>
Now, given the desired state <math>q_d</math> and the actual state <math>q</math>, with error <math>e = q_d - q</math>, define a function <math>r</math> as
:<math>
r=\dot{e}+\alpha e
</math>
A control-Lyapunov function candidate is then
:<math>
V=\frac{1}{2}r^2
</math>
which is positive definite in <math>r</math>: it is zero only when <math>r=\dot{e}+\alpha e=0</math>, and positive otherwise.
 
Now, taking the time derivative of <math>V</math> gives
:<math>
\dot{V}=r\dot{r}
</math>
:<math>
\dot{V}=(\dot{e}+\alpha e)(\ddot{e}+\alpha \dot{e})
</math>
 
The goal is to get the time derivative to be
:<math>
\dot{V}=-\kappa V
</math>
which drives <math>V</math> exponentially to zero from any initial condition, since <math>V</math> is globally positive definite in <math>r</math>.
 
Hence we want the rightmost bracket of <math>\dot{V}</math>,
:<math>
(\ddot{e}+\alpha \dot{e})=(\ddot{q}_d-\ddot{q}+\alpha \dot{e})
</math>
to fulfill the requirement
:<math>
(\ddot{q}_d-\ddot{q}+\alpha \dot{e}) = -\frac{\kappa}{2}(\dot{e}+\alpha e)
</math>
which upon substitution of the dynamics, <math>\ddot{q}</math>, gives
:<math>
(\ddot{q}_d-\frac{u-K_0q-K_1q^3-b\dot{q}}{m(1+q^2)}+\alpha \dot{e}) = -\frac{\kappa}{2}(\dot{e}+\alpha e)
</math>
Solving for <math>u</math> yields the control law
:<math>
u= m(1+q^2)(\ddot{q}_d + \alpha \dot{e}+\frac{\kappa}{2}r )+K_0q+K_1q^3+b\dot{q}
</math>
with <math>\kappa</math> and <math>\alpha</math>, both greater than zero, as tunable parameters.
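As a numerical sanity check, the closed loop can be simulated directly. The following is a minimal Python sketch; the plant parameters, gains, and set-point (<math>m</math>, <math>b</math>, <math>K_0</math>, <math>K_1</math>, <math>\kappa</math>, <math>\alpha</math>, <math>q_d</math>) are illustrative assumptions, not values taken from the text.

<syntaxhighlight lang="python">
import numpy as np

# Assumed plant parameters and controller gains (illustrative only).
m, b, K0, K1 = 1.0, 0.5, 1.0, 0.2
kappa, alpha = 4.0, 2.0
q_d, qd_dot, qd_ddot = 1.0, 0.0, 0.0   # constant desired state

def control(q, q_dot):
    """Control law u derived above."""
    e, e_dot = q_d - q, qd_dot - q_dot
    r = e_dot + alpha * e
    return (m * (1 + q**2) * (qd_ddot + alpha * e_dot + 0.5 * kappa * r)
            + K0 * q + K1 * q**3 + b * q_dot)

q, q_dot, dt, T = 0.0, 0.0, 1e-4, 2.0
for _ in range(int(T / dt)):
    u = control(q, q_dot)
    # Plant: m(1+q^2) q'' + b q' + K0 q + K1 q^3 = u
    q_ddot = (u - b * q_dot - K0 * q - K1 * q**3) / (m * (1 + q**2))
    q, q_dot = q + dt * q_dot, q_dot + dt * q_ddot   # forward-Euler step

r = (qd_dot - q_dot) + alpha * (q_d - q)
print(0.5 * r**2)                                    # V at t = T
print(0.5 * (alpha * q_d)**2 * np.exp(-kappa * T))   # predicted V(0) e^{-kappa T}
</syntaxhighlight>

Up to integration error, the two printed values should agree, matching the decay <math>V(t)=V(0)e^{-\kappa t}</math> derived below.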
 
This control law guarantees global exponential stability, since substituting it into the time derivative yields, as expected,
:<math>
\dot{V}=-\kappa V
</math>
which is a linear first-order differential equation with solution
:<math>
V=V(0)e^{-\kappa t}
</math>
 
Hence the error and the error rate, recalling that <math>V=\frac{1}{2}(\dot{e}+\alpha e)^2</math>, decay exponentially to zero.
 
To tune a particular response from this, it is necessary to substitute back into the solution derived for <math>V</math> and solve for <math>e</math>. This is left as an exercise for the reader, but the first few steps of the solution are:
 
:<math>
r\dot{r}=-\frac{\kappa}{2}r^2
</math>
:<math>
\dot{r}=-\frac{\kappa}{2}r
</math>
:<math>
r=r(0)e^{-\frac{\kappa}{2} t}
</math>
:<math>
\dot{e}+\alpha e= (\dot{e}(0)+\alpha e(0))e^{-\frac{\kappa}{2} t}
</math>
which can then be solved by standard methods for linear differential equations.
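For instance, using the integrating factor <math>e^{\alpha t}</math>, and assuming <math>\alpha \ne \tfrac{\kappa}{2}</math> so that the two exponential modes are distinct, the last equation integrates to
:<math>
e(t) = e(0)e^{-\alpha t} + \frac{\dot{e}(0)+\alpha e(0)}{\alpha-\frac{\kappa}{2}}\left(e^{-\frac{\kappa}{2}t}-e^{-\alpha t}\right)
</math>
which displays the two decay rates <math>\alpha</math> and <math>\tfrac{\kappa}{2}</math> explicitly.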
 
==Notes==
 
{{Reflist}}
==References==
 
*{{cite book|last=Freeman|first=Randy A.|coauthors=Petar V. Kokotović|title=Robust Nonlinear Control Design|publisher=Birkhäuser|year=2008|edition=illustrated, reprint|pages=257|isbn=0-8176-4758-9|url=http://books.google.com/books?id=_eTb4Yl0SOEC|accessdate=2009-03-04|language=English}}
==See also==
* [[Artstein's theorem]]
* [[Lyapunov optimization]]
* [[Drift plus penalty]]
 
[[Category:Stability theory]]
