The '''Hamiltonian''' of [[Optimal control|optimal control theory]] was developed by [[Lev Semyonovich Pontryagin|L. S. Pontryagin]] as part of his [[Pontryagin's minimum principle|minimum principle]].<ref>I. M. Ross, ''A Primer on Pontryagin's Principle in Optimal Control'', Collegiate Publishers, 2009.</ref> It was inspired by, but is distinct from, the [[Hamiltonian mechanics|Hamiltonian]] of classical mechanics. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to minimize the Hamiltonian. For details see [[Pontryagin's minimum principle]].
 
== Notation and problem statement ==
 
A control <math>u(t)</math> is to be chosen so as to minimize the objective function
 
:<math>
J(u)=\Psi(x(T))+\int^T_0 L(x,u,t) dt
</math>
 
where <math>x(t)</math> is the system state, which evolves according to the state equations
 
:<math>
\dot{x}=f(x,u,t) \qquad x(0)=x_0 \quad t \in [0,T]
</math>
 
and the control must satisfy the constraints
 
:<math>
a \le u(t) \le b \quad t \in [0,T]
</math>
 
== Definition of the Hamiltonian ==
 
The Hamiltonian is defined as

: <math>
H(x,\lambda,u,t)=\lambda^T(t)f(x,u,t)+L(x,u,t) \,
</math>
 
where <math>\lambda(t)</math> is a vector of [[Costate equations|costate]] variables of the same dimension as the state variables <math>x(t)</math>.
 
For information on the properties of the Hamiltonian, see [[Pontryagin's minimum principle]].
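As a simple illustration of the definition above (a hypothetical minimum-energy example, not part of the problem statement), consider the scalar system <math>\dot{x}=u</math> with <math>\Psi=0</math> and running cost <math>L=\tfrac{1}{2}u^2</math>. Then

:<math>
H(x,\lambda,u,t)=\lambda u+\tfrac{1}{2}u^2,
</math>

and minimizing over <math>u</math> gives <math>\partial H/\partial u=\lambda+u=0</math>, i.e. <math>u=-\lambda</math>. Since <math>H</math> does not depend on <math>x</math>, the costate equation <math>\dot{\lambda}=-\partial H/\partial x=0</math> of the minimum principle makes <math>\lambda</math>, and hence the optimal control, constant on <math>[0,T]</math>.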
 
== The Hamiltonian in discrete time ==
 
When the problem is formulated in discrete time, the Hamiltonian is defined as:
 
:<math>
H(x,\lambda,u,t)=\lambda^T(t+1)f(x,u,t)+L(x,u,t) \,
</math>
 
and the [[costate equations]] are
 
:<math>
\lambda(t)=-\frac{\partial H}{\partial x}
</math>
(Note that the discrete-time Hamiltonian at time <math>t</math> involves the costate variable at time <math>t+1.</math><ref>Varaiya, Chapter 6</ref> This detail is essential: when we differentiate <math>H</math> with respect to <math>x</math>, we obtain a term involving <math>\lambda(t+1)</math> on the right-hand side of the costate equation, so the recursion runs backwards in time. Using the wrong convention here leads to a costate equation that is not a backward difference equation, and hence to incorrect results.)
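The backward recursion can be sketched numerically. The following is a minimal illustration (the scalar dynamics <math>f=ax+bu</math>, the quadratic running cost, and all coefficient values are assumptions chosen for the example), following the sign convention stated above:

```python
# Backward costate recursion for an assumed scalar linear-quadratic example:
#   x(t+1) = f(x,u,t) = a*x(t) + b*u(t)
#   L(x,u,t) = 0.5*(q*x**2 + r*u**2)
# Discrete Hamiltonian: H(t) = lam(t+1)*f(x,u,t) + L(x,u,t),
# so dH/dx = a*lam(t+1) + q*x(t), and the costate equation above gives
#   lam(t) = -(a*lam(t+1) + q*x(t)).

a, b = 0.9, 0.5          # dynamics coefficients (assumed values)
q, r = 1.0, 0.1          # stage-cost weights (assumed values)
T = 5                    # horizon

# Any fixed control sequence serves to illustrate the recursion.
u = [0.0] * T
x = [1.0]                # x(0) = 1
for t in range(T):       # forward pass: simulate the state
    x.append(a * x[t] + b * u[t])

lam = [0.0] * (T + 1)
lam[T] = 0.0             # terminal condition (Psi = 0 here)
for t in range(T - 1, -1, -1):   # backward pass: costate recursion
    lam[t] = -(a * lam[t + 1] + q * x[t])
```

Each <code>lam[t]</code> is computed from <code>lam[t+1]</code>, so the recursion necessarily runs backwards from the terminal time.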
 
==The Hamiltonian of control compared to the Hamiltonian of mechanics==
 
[[William Rowan Hamilton]] defined the [[Hamiltonian mechanics|Hamiltonian]] as a function of three variables:
 
:<math>\mathcal{H} = \mathcal{H}(p,q,t) = \langle p,\dot{q} \rangle -L(q,\dot{q},t)</math>
 
where <math>\dot{q}</math> is defined implicitly by
 
:<math>p = \frac{\partial L}{\partial \dot{q}}</math>
 
Hamilton then formulated his equations as
 
:<math>\frac{ d}{ dt}p(t) = -\frac{\partial}{\partial q}\mathcal{H}</math>
:<math>\frac{ d}{ dt}q(t) =~~\frac{\partial}{\partial p}\mathcal{H}</math>
 
In contrast, the Hamiltonian of control theory (as defined by [[Pontryagin]]) is a function of four variables:
 
:<math>H(q,u,p,t)= \langle p,u \rangle -L(q,u,t)</math>
 
and the associated conditions for a maximum are
 
:<math>\frac{dp}{dt} = -\frac{\partial H}{\partial q}</math>
 
:<math>\frac{dq}{dt} = ~~\frac{\partial H}{\partial p}</math>
 
:<math>\frac{\partial H}{\partial u} = 0</math>
 
This difference can be confusing; nevertheless, a specific problem, such as the [[Brachystochrone|brachistochrone]] problem, can be solved by either method. For details, see the article by Sussmann and Willems.<ref>[https://netfiles.uiuc.edu/liberzon/www/teaching/sussmann-willems.pdf Sussmann, Willems: 300 Years of Optimal Control, IEEE Control Systems, June 1997]</ref>
 
==References==
 
{{reflist}}
 
==External links==
* P. Varaiya: ''Lecture Notes on Optimization'', 2nd ed. (1998) [http://paleale.eecs.berkeley.edu/~varaiya/papers_ps.dir/NOO.pdf]
 
* [[I. Michael Ross|I. M. Ross]], Pontryagin's Hamiltonian Illustrated with Examples, 2009,  [http://www.elissarglobal.com/home/get-chapter-2-free/ Chapter 2 download]
 
{{DEFAULTSORT:Hamiltonian (Control Theory)}}
[[Category:Optimal control]]
