{{About||control theory in linguistics|control (linguistics)|control theory in psychology and sociology|control theory (sociology)|and|Perceptual control theory}}

[[File:Feedback loop with descriptions.svg|thumb|right|400px|The concept of the feedback loop to control the dynamic behavior of the system: this is negative feedback, because the sensed value is subtracted from the desired value to create the error signal, which is amplified by the controller.]]
'''Control theory''' is an [[Multi-disciplinary Engineering|interdisciplinary branch of engineering]] and [[mathematics]] that deals with the behavior of [[dynamical system]]s with inputs. The external input of a system is called the ''[[reference]]''. When one or more output variables of a system need to follow a certain reference over time, a [[Controller (control theory)|controller]] manipulates the inputs to a system to obtain the desired effect on the output of the system.

The usual objective of control theory is to calculate solutions for the proper corrective action from the controller that result in system stability, that is, the system will hold the set point and not oscillate around it.

The inputs and outputs of a continuous control system are generally related by differential equations. If these are linear with constant coefficients, then a transfer function relating the input and output can be obtained by taking their [[Laplace transform]]. If the differential equations are nonlinear and have a known solution, then it may be possible to linearize the nonlinear differential equations at that solution.<ref>[http://www.mathworks.com/help/toolbox/simulink/slref/trim.html trim point]</ref> If the resulting linear differential equations have constant coefficients, then one can take their Laplace transform to obtain a transfer function.

The [[transfer function]] is also known as the system function or network function. It is a mathematical representation, in terms of spatial or temporal frequency, of the relation between the input and output of a linear time-invariant system, such as the one obtained by linearizing the nonlinear differential equations that describe the system.
 
Extensive use is usually made of a diagrammatic style known as the [[block diagram]].
 
==Overview==
[[File:Smooth nonlinear trajectory planning control on a dual pendula system.png|thumbnail|Smooth nonlinear trajectory planning with linear quadratic Gaussian (LQG) feedback control on a dual pendula system.]]
 
Control theory is
* a theory that deals with influencing the behavior of [[dynamical system]]s
* an interdisciplinary subfield of science, which originated in [[engineering]] and [[mathematics]] and evolved into use by the social sciences, such as [[psychology]], [[control theory (sociology)|sociology]] and [[criminology]], as well as in the [[financial system]].
Control systems may be thought of as having four functions: Measure, Compare, Compute, and Correct. These four functions are completed by five elements: [[Detector]], [[Transducer]], [[Transmitter]], [[Controller (control theory)|Controller]], and Final Control Element. The measuring function is completed by the detector, transducer and transmitter. In practical applications these three elements are typically contained in one unit. A standard example of a measuring unit is a [[resistance thermometer]]. The compare and compute functions are completed within the controller, which may be implemented electronically by [[proportional control]], a [[PI controller]], [[PID controller]], bistable, hysteretic control or [[programmable logic controller]].  Older controller units have been mechanical, as in a [[centrifugal governor]] or a [[carburetor]]. The correct function is completed with a final control element. The final control element changes an input or output in the control system that affects the manipulated or controlled variable.
 
===An example===
Consider a car's [[cruise control]], which is a device designed to maintain vehicle speed at a constant ''desired'' or ''reference'' speed provided by the driver. The ''controller'' is the cruise control, the ''plant'' is the car,  and the ''system'' is the car and the cruise control. The system output is the car's speed, and the control itself is the engine's [[throttle]] position which determines how much power the engine generates.
 
A primitive way to implement cruise control is simply to lock the throttle position when the driver engages cruise control. However, if the cruise control is engaged on a stretch of flat road, then the car will travel slower going uphill and faster when going downhill. This type of controller is called an [[open-loop controller]] because no measurement of the system output (the car's speed) is used to alter the control (the throttle position). As a result, the controller cannot compensate for changes acting on the car, like a change in the slope of the road.
 
In a '''closed-loop control system''', a sensor monitors the system output (the car's speed) and feeds the data to a controller which adjusts the control (the throttle position) as necessary to maintain the desired system output (match the car's speed to the reference speed). Now, when the car goes uphill, the decrease in speed is measured, and the throttle position changed to increase engine power, speeding up the vehicle. Feedback from measuring the car's speed has allowed the controller to dynamically compensate for changes to the car's speed. It is from this feedback that the paradigm of the control ''loop'' arises: the control affects the system output, which in turn is measured and looped back to alter the control.
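The feedback loop just described can be sketched in a few lines of code. The following is a minimal illustrative simulation, not a real cruise-control implementation: it assumes a crude first-order vehicle model and a purely proportional controller, and every numerical value (gain, drag coefficient, slope disturbance) is an arbitrary placeholder.

<syntaxhighlight lang="python">
# Minimal closed-loop cruise-control sketch (all values illustrative).
reference = 30.0    # desired speed, m/s
speed = 30.0        # current speed, m/s
dt = 0.1            # simulation time step, s

kp = 0.8            # proportional gain (hypothetical tuning)
drag = 0.05         # crude lumped drag coefficient
slope_accel = -1.0  # deceleration from an uphill grade, m/s^2 (disturbance)

for _ in range(600):                           # simulate 60 s
    error = reference - speed                  # Compare: sensor vs. reference
    throttle = max(0.0, min(1.0, kp * error))  # Compute: P control, saturated
    accel = 4.0 * throttle - drag * speed + slope_accel
    speed += accel * dt                        # Correct: plant responds

print(f"speed after 60 s on the hill: {speed:.1f} m/s")
</syntaxhighlight>

Because the controller here is purely proportional, the simulated car settles slightly below the reference on the hill; the integral term of the [[PID controller]] discussed below removes exactly this kind of steady-state error.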
 
==History==
[[File:Boulton and Watt centrifugal governor-MJ.jpg|thumb|right|[[Centrifugal governor]] in a [[Boulton & Watt engine]] of 1788]]
 
Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the [[centrifugal governor]] conducted by the physicist [[James Clerk Maxwell]] in 1868 and entitled ''On Governors''.<ref name=Maxwell1867>{{cite journal
| author = Maxwell, J.C.
| year = 1868
| title = On Governors
| journal = Proceedings of the Royal Society of London
| volume = 16
| pages = 270–283
| doi = 10.1098/rspl.1867.0055
| jstor=112510
}}</ref> This described and analyzed the phenomenon of [[self-oscillation]], in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, [[Edward John Routh]], abstracted Maxwell's results for the general class of linear systems.<ref name=Routh1975>{{cite book
| author = Routh, E.J.
| coauthors = Fuller, A.T.
| year = 1975
| title = Stability of motion
| publisher = Taylor & Francis
| isbn =
}}</ref> Independently, [[Adolf Hurwitz]] analyzed system stability using differential equations in 1895, resulting in what is now known as the [[Routh–Hurwitz theorem]].<ref name=Routh1877>{{cite book
| author = Routh, E.J.
| year = 1877
| title = A Treatise on the Stability of a Given State of Motion, Particularly Steady Motion: Particularly Steady Motion
| publisher = Macmillan and co.
| isbn =
}}</ref><ref name=Hurwitz1964>{{cite journal
| author = Hurwitz, A.
| year = 1964
| title = On The Conditions Under Which An Equation Has Only Roots With Negative Real Parts
| journal = Selected Papers on Mathematical Trends in Control Theory
}}</ref>
 
A notable application of dynamic control was in the area of manned flight. The [[Wright brothers]] made their first successful test flights on December 17, 1903 and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.
 
By [[World War II]], control theory was an important part of [[fire-control system]]s, [[guidance system]]s and [[electronics]].
 
Sometimes mechanical methods are used to improve the stability of systems. For example, [[Stabilizer (ship)|ship stabilizers]] are fins mounted beneath the waterline and emerging laterally.  In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.
 
The [[Sidewinder missile]] uses small control surfaces placed at the rear of the missile with spinning disks on their outer surfaces; these are known as [[rolleron]]s. Airflow over the disks spins them to a high speed. If the missile starts to roll, the gyroscopic force of the disks drives the control surface into the airflow, cancelling the motion. Thus, the Sidewinder team replaced a potentially complex control system with a simple mechanical solution.
 
The [[Space Race]] also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as [[economics]].
 
==People in systems and control==
{{Main|People in systems and control}}
Many active and historical figures made significant contributions to control theory, including, for example:
* [[Pierre-Simon Laplace]] (1749–1827) invented the [[Z-transform]] in his work on [[probability theory]], now used to solve discrete-time control theory problems. The Z-transform is a discrete-time equivalent of the [[Laplace transform]], which is named after him.
* [[Alexander Lyapunov]] (1857–1918), whose work in the 1890s marks the beginning of [[stability theory]].
* [[Harold Stephen Black|Harold S. Black]] (1898–1983), invented the concept of [[negative feedback amplifier]]s in 1927. He managed to develop stable negative feedback amplifiers in the 1930s.
* [[Harry Nyquist]] (1889–1976), developed the [[Nyquist stability criterion]] for feedback systems in the 1930s.
* [[Richard Bellman]] (1920–1984), developed [[dynamic programming]] since the 1940s.
* [[Andrey Kolmogorov]] (1903–1987) co-developed the [[Wiener filter|Wiener–Kolmogorov filter]] (1941).
* [[Norbert Wiener]] (1894–1964) co-developed the Wiener–Kolmogorov filter and coined the term [[cybernetics]] in the 1940s.
* [[John R. Ragazzini]] (1912–1988) introduced [[digital control]] and the use of the [[Z-transform]] in control theory (invented by Laplace) in the 1950s.
* [[Lev Pontryagin]] (1908–1988) introduced the [[Pontryagin's minimum principle|maximum principle]] and the [[Bang-bang control|bang-bang principle]].
 
==Classical control theory==
To overcome the limitations of the open-loop controller, control theory introduces [[feedback]].
A closed-loop [[controller (control theory)|controller]] uses feedback to control [[state (controls)|states]] or [[output]]s of a [[dynamical system]]. Its name comes from the information path in the system: process inputs (e.g., [[voltage]] applied to an [[electric motor]]) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with [[sensor]]s and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.
 
Closed-loop controllers have the following advantages over [[open-loop controller]]s:
* disturbance rejection (such as hills in the cruise control example above)
* guaranteed performance even with [[mathematical model|model]] uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact
* [[instability|unstable]] processes can be stabilized
* reduced sensitivity to parameter variations
* improved reference tracking performance
 
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed [[feed forward (control)|feedforward]] and serves to further improve reference tracking performance.
 
A common closed-loop controller architecture is the [[PID controller]].
 
===Closed-loop transfer function===
{{details|Closed-loop transfer function}}
The output of the system ''y(t)'' is fed back through a sensor measurement ''F'' to the reference value ''r(t)''. The controller ''C'' then takes the error ''e'' (difference) between the reference and the output to change the inputs ''u'' to the system under control ''P''. This is shown in the figure. This kind of controller is a closed-loop controller or feedback controller.
 
This is called a single-input-single-output (''SISO'') control system; ''MIMO'' (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through [[coordinate vector|vectors]] instead of simple [[scalar (mathematics)|scalar]] values. For some [[distributed parameter systems]] the vectors may be infinite-[[Dimension (vector space)|dimensional]] (typically functions).
 
[[File:simple feedback control loop2.svg|center|A simple feedback control loop]]
 
If we assume the controller ''C'', the plant ''P'', and the sensor ''F'' are [[linear]] and [[time-invariant]] (i.e., their [[transfer function]]s ''C(s)'', ''P(s)'', and ''F(s)'' do not depend on time), the systems above can be analysed using the [[Laplace transform]] on the variables. This gives the following relations:
 
: <math>Y(s) = P(s) U(s)\,\!</math>
: <math>U(s) = C(s) E(s)\,\!</math>
: <math>E(s) = R(s) - F(s)Y(s).\,\!</math>
 
Solving for ''Y''(''s'') in terms of ''R''(''s'') gives:
 
: <math>Y(s) = \left( \frac{P(s)C(s)}{1 + F(s)P(s)C(s)} \right) R(s) = H(s)R(s).</math>
 
The expression <math>H(s) = \frac{P(s)C(s)}{1 + F(s)P(s)C(s)}</math> is referred to as the ''closed-loop transfer function'' of the system. The numerator is the forward (open-loop) gain from ''r'' to ''y'', and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If <math>|P(s)C(s)| \gg 1</math>, i.e., it has a large [[norm (mathematics)|norm]] for each value of ''s'', and if <math>|F(s)| \approx 1</math>, then ''Y(s)'' is approximately equal to ''R(s)'' and the output closely tracks the reference input.
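As a concrete illustration, take an arbitrarily chosen first-order plant <math>P(s) = \frac{1}{s+1}</math>, a proportional controller <math>C(s) = K</math> and unity feedback <math>F(s) = 1</math>. The closed-loop transfer function is then

: <math>H(s) = \frac{\frac{K}{s+1}}{1 + \frac{K}{s+1}} = \frac{K}{s + 1 + K},</math>

so the closed-loop pole lies at <math>s = -(1+K)</math>. Increasing the gain moves the pole deeper into the left half plane (a faster response) and pushes the DC gain <math>H(0) = \frac{K}{1+K}</math> towards 1, so the output tracks the reference more closely, exactly as the large-loop-gain argument above predicts.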
 
===PID controller===
{{details|PID controller}}
The [[PID controller]] is probably the most-used feedback control design. ''PID'' is an acronym for ''Proportional-Integral-Derivative'', referring to the three terms operating on the error signal to produce a control signal. If ''u(t)'' is the control signal sent to the system, ''y(t)'' is the measured output, ''r(t)'' is the desired output, and <math>e(t)=r(t)- y(t)</math> is the tracking error, a PID controller has the general form
 
:<math>u(t) =  K_P e(t) + K_I \int e(t)\text{d}t + K_D \frac{\text{d}}{\text{d}t}e(t).</math>
 
The desired closed loop dynamics are obtained by adjusting the three parameters <math> K_P</math>, <math> K_I</math> and <math> K_D</math>, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in [[process control]]). The derivative term is used to provide damping or shaping of the response. PID controllers are the most well-established class of control systems; however, they cannot be used in several more complicated cases, especially if [[MIMO]] systems are considered.
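In discrete time, the integral becomes a running sum and the derivative a finite difference. The sketch below is a generic textbook-style implementation rather than any particular library's API; the gains and sample time are hypothetical placeholders to be tuned for a given plant.

<syntaxhighlight lang="python">
class PID:
    """Textbook discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # running sum approximating the integral
        self.prev_error = 0.0

    def update(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# hypothetical tuning, sampled at 100 Hz
controller = PID(kp=1.2, ki=0.5, kd=0.05, dt=0.01)
u = controller.update(reference=1.0, measurement=0.8)
</syntaxhighlight>

In practice the derivative term usually acts on the measurement rather than the error (to avoid "derivative kick" on setpoint changes) and is low-pass filtered, as discussed below.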
 
Applying Laplace transformation results in the transformed PID controller equation
 
:<math>u(s) =  K_P e(s) + K_I \frac{1}{s} e(s) + K_D s e(s)</math>
:<math>u(s) =  \left(K_P + K_I \frac{1}{s} + K_D s\right) e(s)</math>
 
with the PID controller transfer function
:<math>C(s) = \left(K_P + K_I \frac{1}{s} + K_D s\right).</math>
 
For practical PID controllers, a pure differentiator is neither physically realisable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach or a differentiator with low-pass roll-off is used instead.
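One representative practical form (shown only as an illustration of the idea, not as the unique choice) replaces the pure derivative term <math>K_D s</math> with a filtered derivative:

: <math>C(s) = K_P + K_I \frac{1}{s} + K_D \frac{s}{1 + \tau s},</math>

where the filter time constant <math>\tau</math> bounds the high-frequency gain of the derivative path by <math>K_D / \tau</math>, limiting the amplification of measurement noise.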
 
==Modern control theory==
In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain [[state space (controls)|state space]] representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With <math>p</math> inputs and <math>q</math> outputs, we would otherwise have to write down <math>q \times p</math> Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a vector within that space.<ref>{{cite book|title=State space & linear systems|series=Schaum's outline series |publisher=McGraw Hill|author=Donald M Wiberg|isbn=0-07-070096-6}}</ref>
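As a small worked example with assumed numerical values, the mass-spring-damper system used later in this article, extended with a force input <math>u(t)</math> so that <math>m \ddot{x}(t) = - K x(t) - \beta \dot{x}(t) + u(t)</math>, can be written in state space form with state vector <math>(x, \dot{x})</math> and simulated with the widely available <code>scipy.signal</code> helpers:

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

# Mass-spring-damper m*x'' = -K*x - beta*x' + u, state = [position, velocity].
m, K, beta = 1.0, 2.0, 0.5           # illustrative parameter values
A = np.array([[0.0, 1.0],
              [-K / m, -beta / m]])  # dynamics in first-order form
B = np.array([[0.0], [1.0 / m]])     # force input enters the velocity equation
C = np.array([[1.0, 0.0]])           # we measure position only
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)
t, y = signal.step(sys)              # step response of the state-space model
print(f"position after a unit step force settles near {y[-1]:.2f}")  # ~1/K
</syntaxhighlight>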
 
==Topics in control theory==
 
===Stability===
 
The ''stability'' of a general [[dynamical system]] with no input can be described with [[Lyapunov stability]] criteria. A [[linear system]] that takes an input is called [[BIBO stability|bounded-input bounded-output (BIBO) stable]] if its output will stay [[bounded function|bounded]] for any bounded input. Stability for [[nonlinear system]]s that take an input is [[input-to-state stability]] (ISS), which combines Lyapunov stability and a notion similar to BIBO stability. For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems.
 
Mathematically, this means that for a causal linear system to be stable, all of the [[Pole (complex analysis)|poles]] of its [[transfer function]] must have negative real parts, i.e. the real part of every pole must be less than zero. Practically speaking, stability requires that the complex poles of the transfer function reside
* in the open left half of the [[complex plane]] for continuous time, when the [[Laplace transform]] is used to obtain the transfer function.
* inside the [[unit circle]] for discrete time, when the [[Z-transform]] is used.
The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in [[Cartesian coordinates]] where the <math>x</math> axis is the real axis and the discrete Z-transform is in [[polar coordinates]] where the <math>\rho</math> axis is the real axis.
 
When the appropriate conditions above are satisfied a system is said to be [[asymptotic stability|asymptotically stable]]: the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is [[marginal stability|marginally stable]]: in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.
 
If the system in question has an [[impulse response]] of
 
:<math>\ x[n] = 0.5^n u[n]</math>
 
then the Z-transform (see [[Z-transform#Example 2 (causal ROC)|this example]]) is given by
 
:<math>\ X(z) = \frac{1}{1 - 0.5z^{-1}}\ </math>
 
which has a pole in <math>z = 0.5</math> (zero [[imaginary number|imaginary part]]). This system is BIBO (asymptotically) stable since the pole is ''inside'' the unit circle.
 
However, if the impulse response was
 
:<math>\ x[n] = 1.5^n u[n]</math>
 
then the Z-transform is
 
:<math>\ X(z) = \frac{1}{1 - 1.5z^{-1}}\ </math>
 
which has a pole at <math>z = 1.5</math> and is not BIBO stable since the pole has a modulus strictly greater than one.
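Both conclusions are easy to verify numerically. The following sketch evaluates the pole of each transfer function from its denominator written in positive powers of <math>z</math> and applies the unit-circle test described above:

<syntaxhighlight lang="python">
import numpy as np

# X(z) = 1 / (1 - a*z^-1) = z / (z - a) for the impulse response a^n * u[n],
# so the single pole is the root of the denominator z - a.
for a in (0.5, 1.5):
    pole = np.roots([1.0, -a])[0]
    stable = abs(pole) < 1.0          # discrete-time BIBO test: |z| < 1
    print(f"a = {a}: pole at {pole.real:.1f}, BIBO stable: {stable}")
# a = 0.5 -> pole inside the unit circle (stable)
# a = 1.5 -> pole outside the unit circle (unstable)
</syntaxhighlight>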
 
Numerous tools exist for the analysis of the poles of a system. These include graphical methods such as the [[root locus]], [[Bode plot]]s or [[Nyquist plot]]s.
 
Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.
 
===Controllability and observability===
{{Main| Controllability|Observability}}
 
[[Controllability]] and [[observability]] are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability, in turn, is related to the possibility of "observing", through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behaviour of that state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.
 
From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behaviour in the closed-loop system. That is, if one of the [[eigenvalues]] of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.
 
Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.
 
===Control specification===
Several different control strategies have been devised over the years. These vary from extremely general ones ([[PID controller]]) to others devoted to very particular classes of systems (especially [[robotics]] or [[aircraft]] cruise control).
 
A control problem can have several specifications. Stability, of course, is always present: the controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it is desirable to obtain particular dynamics in the closed loop: i.e. that the poles have <math>Re[\lambda] < -\overline{\lambda}</math>, where <math>\overline{\lambda}</math> is a fixed value strictly greater than zero, instead of simply asking that <math>Re[\lambda]<0</math>.
 
Another typical specification is the rejection of a step disturbance; including an [[integrator]] in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.
 
Other "classical" control theory specifications regard the time-response of the closed-loop system: these include the [[rise time]] (the time needed by the control system to reach the desired value after a perturbation), peak [[overshoot (signal)|overshoot]] (the highest value reached by the response before reaching the desired value) and others ([[settling time]], quarter-decay). Frequency domain specifications are usually related to [[robust control|robustness]] (see after).
 
Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).
 
===Model identification and robustness===
A control system must always have some robustness property. A [[robust control]]ler is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This specification is important: no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise the true system dynamics can be so complicated that a complete model is impossible.
 
;System identification
{{details|System identification}}
The process of determining the equations that govern the model's dynamics is called [[system identification]]. This can be done off-line: for example, executing a series of measurements from which to calculate an approximated mathematical model, typically its [[transfer function]] or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations: for example, in the case of a [[Damping#Example: mass–spring–damper|mass-spring-damper]] system we know that <math> m \ddot{x}(t) = - K x(t) - \beta \dot{x}(t)</math>. Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system whose true parameter values differ from the nominal ones.
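As an illustration of off-line identification, the mass-spring-damper model above is linear in the unknown ratios <math>K/m</math> and <math>\beta/m</math>, so they can be estimated from sampled records of position, velocity and acceleration by ordinary least squares. The sketch below uses synthetic data with arbitrary parameter and noise values; a real experiment would use measured signals instead.

<syntaxhighlight lang="python">
import numpy as np

# Synthetic "measurements" consistent with x'' = -(K/m)x - (beta/m)x',
# generated here with K/m = 2.0 and beta/m = 0.5 plus sensor noise.
rng = np.random.default_rng(0)
x = np.cos(np.linspace(0.0, 10.0, 500))       # stand-in position record
v = np.gradient(x, 10.0 / 499)                # numerical velocity
acc = -2.0 * x - 0.5 * v + 0.01 * rng.standard_normal(500)

# The model is linear in the two unknown ratios, so fit by least squares.
Phi = np.column_stack([-x, -v])               # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, acc, rcond=None)
print(f"identified K/m = {theta[0]:.2f}, beta/m = {theta[1]:.2f}")
</syntaxhighlight>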
 
Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running: in this way, if a drastic variation of the parameters ensues (for example, if the robot's arm releases a weight), the controller will adjust itself accordingly in order to ensure the correct performance.
 
;Analysis
Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using [[Nyquist diagram|Nyquist]] and [[Bode diagram]]s. Topics include [[Bode plot#Gain margin and phase margin|gain and phase margin]]s. For MIMO (multi input multi output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section): i.e., if particular robustness qualities are needed, the engineer must choose a control technique that includes those qualities in its properties.
 
;Constraints
A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system: for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: [[model predictive control]] (see later) and [[anti-wind up system (control)|anti-windup systems]]. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.
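A minimal sketch of one common anti-windup scheme (conditional integration), assuming the discrete PID form sketched earlier; the gains and actuator limits are placeholders:

<syntaxhighlight lang="python">
def pid_antiwindup_step(state, error, kp, ki, kd, dt, u_min, u_max):
    """One PID step with conditional-integration anti-windup."""
    integral, prev_error = state
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    u_sat = min(max(u, u_min), u_max)   # clamp to the actuator limits
    if u == u_sat:                      # integrate only while unsaturated,
        integral += error * dt          # so the integrator cannot wind up
    return u_sat, (integral, error)

state = (0.0, 0.0)
u, state = pid_antiwindup_step(state, error=2.0, kp=1.0, ki=0.5, kd=0.0,
                               dt=0.01, u_min=-1.0, u_max=1.0)
</syntaxhighlight>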
 
==System classifications==
 
===Linear systems control===
{{Main|State space (controls)}}
For MIMO systems, pole placement can be performed mathematically using a [[State space (controls)|state space representation]] of the open-loop system and calculating a feedback matrix that assigns poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and it cannot always ensure robustness. Furthermore, not all system states are in general measured, so observers must be included and incorporated in the pole placement design.
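As a toy instance of state-feedback pole placement, the sketch below uses the <code>scipy.signal.place_poles</code> routine to place the poles of a double integrator at arbitrarily chosen locations:

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

# Double integrator x1' = x2, x2' = u: both open-loop poles at the origin.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Compute K so that the poles of A - B*K sit at the illustrative locations.
result = signal.place_poles(A, B, np.array([-2.0, -3.0]))
K = result.gain_matrix
print("feedback gain K:", K)                         # roughly [[6., 5.]]
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
</syntaxhighlight>

For this system the result can be checked by hand: with <math>u = -Kx</math> the closed-loop characteristic polynomial is <math>s^2 + k_2 s + k_1</math>, so matching <math>(s+2)(s+3) = s^2 + 5s + 6</math> gives <math>K = [6\;\; 5]</math>.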
 
===Nonlinear systems control===
{{Main|Nonlinear control}}
Processes in industries like [[robotics]] and the [[aerospace industry]] typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., [[feedback linearization]], [[backstepping]], [[sliding mode control]] and trajectory linearization control, normally take advantage of results based on [[Lyapunov's theory]]. [[Differential geometry]] has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem.
 
===Decentralized systems===
{{Main|Distributed control system}}
When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways, for instance, it helps control systems operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions.
 
==Main control strategies==
Every control system must guarantee first the stability of the closed-loop behavior. For [[linear system]]s, this can be obtained by directly placing the poles. Non-linear control systems use specific theories (normally based on [[Aleksandr Lyapunov]]'s theory) to ensure stability without regard to the inner dynamics of the system. The ability to fulfill different specifications depends on the model considered and the control strategy chosen. A summary list of the main control techniques follows:
 
[[Adaptive control]] uses on-line identification of the process parameters, or modification of controller gains, thereby obtaining strong robustness properties. Adaptive controls were applied for the first time in the [[aerospace industry]] in the 1950s, and have found particular success in that field.
 
A [[hierarchical control system]] is a type of [[control system]] in which a set of devices and governing software is arranged in a [[hierarchical]] [[tree (data structure)|tree]].  When the links in the tree are implemented by a [[computer network]], then that hierarchical control system is also a form of [[Networked control system]].
 
[[Intelligent control]] uses various AI computing approaches like [[neural networks]], [[Bayesian probability]], [[fuzzy logic]],<ref>{{cite journal | title=A novel fuzzy framework for nonlinear system control| journal=Fuzzy Sets and Systems | year=2010 | last=Liu | coauthors=Wang, Golnaraghi, Kubica | volume=161 | issue=21 | pages=2746–2759 | first1=Jie | doi=10.1016/j.fss.2010.04.009}}</ref> [[machine learning]], [[evolutionary computation]] and [[genetic algorithms]] to control a [[dynamic system]].
 
[[Optimal control]] is a particular control technique in which the control signal optimizes a certain "cost index": for example, in the case of a satellite, the jet thrusts needed to bring it to the desired trajectory while consuming the least amount of fuel. Two optimal control design methods have been widely used in industrial applications, as it has been shown they can guarantee closed-loop stability. These are [[Model Predictive Control]] (MPC) and [[linear-quadratic-Gaussian control]] (LQG). The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes. However, the "optimal control" structure in MPC is only a means to achieve such a result, as it does not optimize a true performance index of the closed-loop control system. Together with PID controllers, MPC systems are the most widely used control technique in [[process control]].
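As one illustration of the machinery behind LQG, the sketch below computes the infinite-horizon LQR state-feedback gain (the deterministic core of an LQG controller) for a double integrator by solving the continuous-time algebraic Riccati equation; the weights <math>Q</math> and <math>R</math> are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import numpy as np
from scipy import linalg

# Double integrator; state cost Q and control cost R chosen arbitrarily.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve A'P + PA - P B R^-1 B' P + Q = 0, then K = R^-1 B' P for u = -K x.
P = linalg.solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("LQR gain:", K)                                # roughly [[1., 1.732]]
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
</syntaxhighlight>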
 
[[Robust control]] deals explicitly with uncertainty in its approach to controller design.  Controllers designed using ''robust control'' methods tend to be able to cope with small differences between the true system and the nominal model used for design.  The early methods of [[Hendrik Wade Bode|Bode]] and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness.  A modern example of a robust control technique is [[H-infinity loop-shaping]] developed by [[Duncan McFarlane]] and [[Keith Glover]] of [[Cambridge University]], [[United Kingdom]].  Robust methods aim to achieve robust performance and/or [[Stability theory|stability]] in the presence of small modeling errors.
 
[[Stochastic control]] deals with control design with uncertainty in the model. In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take into account these random deviations.
 
[[Energy-shaping control]] views the plant and the controller as energy-transformation devices. The control strategy is formulated in terms of interconnection (in a power-preserving manner) in order to achieve a desired behavior.
 
[[Self-organized criticality control]] may be defined as attempts to interfere in the processes by which the [[self-organized]] system dissipates energy.
 
==See also==
{{multicol}}
;Examples of control systems
* [[Automation]]
* [[Deadbeat controller]]
* [[Distributed parameter systems]]
* [[Fractional-order control]]
* [[H-infinity loop-shaping]]
* [[Hierarchical control system]]
* [[Model predictive control]]
* [[PID controller]]
* [[Process control]]
* [[Robust control]]
* [[Servomechanism]]
* [[State space (controls)]]
* [[Vector control (motor)|Vector control]]
 
{{multicol-break}}
;Topics in control theory
* [[Coefficient diagram method]]
* [[Control reconfiguration]]
* [[Feedback]]
* [[H infinity]]
* [[Hankel singular value]]
* [[Krener's theorem]]
* [[Lead-lag compensator]]
* [[Minor loop feedback]]
* [[Positive systems]]
* [[Radial basis function]]
* [[Root locus]]
* [[Signal-flow graph]]s
* [[Stable polynomial]]
* [[State space representation]]
* [[Underactuation]]
* [[Youla–Kucera parametrization]]
 
{{multicol-break}}
{{Portal|Systems science}}
;Other related topics
* [[Automation and Remote Control]]
* [[Bond graph]]
* [[Control engineering]]
* [[Control–feedback–abort loop]]
* [[Controller (control theory)]]
* [[Cybernetics]]
* [[Intelligent control]]
* [[Mathematical system theory]]
* [[Negative feedback amplifier]]
* [[People in systems and control]]
* [[Perceptual control theory]]
* [[Systems theory]]
* [[Time scale calculus]]
{{multicol-end}}
 
==References==
{{Reflist}}
 
==Further reading==
*{{cite book
| editor-last  = Levine
| editor-first = William S.
| title        = The Control Handbook
| publisher    = CRC Press
| place        = New York
| year        = 1996
| isbn        = 978-0-8493-8570-4
}}
*{{cite book | author= Karl J. Åström and Richard M. Murray| year = 2008 | title = Feedback Systems: An Introduction for Scientists and Engineers.| publisher = Princeton University Press | url = http://www.cds.caltech.edu/~murray/books/AM08/pdf/am08-complete_28Sep12.pdf | isbn = 0-691-13576-2}}
*{{cite book | author= Christopher Kilian | title= Modern Control Technology | publisher= Thompson Delmar Learning | year= 2005 | isbn=1-4018-5806-6 }}
*{{cite book | author= Vannevar Bush | title= Operational Circuit Analysis | publisher= John Wiley and Sons, Inc. | year= 1929 }}
*{{cite book | author= Robert F. Stengel | title= Optimal Control and Estimation | publisher= Dover Publications | year= 1994 | isbn=0-486-68200-5}}
*{{cite book |last=Franklin et al. |first= |authorlink= |coauthors= |editor= |others= |title=Feedback Control of Dynamic Systems |origdate= |origyear= |origmonth= |url= |accessdate= |edition=4 |date= |year=2002 |month= |publisher=Prentice Hall |location=New Jersey |language= |isbn=0-13-032393-4 |doi = |pages= |chapter= |chapterurl= |quote = }}
*{{cite book | author= Joseph L. Hellerstein, Dawn M. Tilbury, and Sujay Parekh | title= Feedback Control of Computing Systems | publisher= John Wiley and Sons | year= 2004 | isbn=0-471-26637-X}}
*{{cite book | author= [[Diederich Hinrichsen]] and Anthony J. Pritchard | title= Mathematical Systems Theory I - Modelling, State Space Analysis, Stability and Robustness | publisher= Springer | year= 2005 | isbn=3-540-44125-5}}
*{{cite journal  | author = Andrei, Neculai  | title = Modern Control Theory - A historical Perspective  | version =  | publisher =  | year = 2005  | url = http://camo.ici.ro/neculai/history.pdf  | accessdate = 2007-10-10 }}
*{{cite book | last = Sontag | first = Eduardo | authorlink = Eduardo D. Sontag | year = 1998 | title = Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition | publisher = Springer | url = http://www.math.rutgers.edu/~sontag/FTP_DIR/sontag_mathematical_control_theory_springer98.pdf | isbn = 0-387-98489-5}}
*{{cite book | last = Goodwin | first = Graham | authorlink = | year = 2001 | title = Control System Design | publisher = Prentice Hall | isbn = 0-13-958653-9}}
*{{cite book | author= Christophe Basso | year = 2012 | title = Designing Control Loops for Linear and Switching Power Supplies: A Tutorial Guide.| publisher = Artech House | url = http://cbasso.pagesperso-orange.fr/Spice.htm | isbn = 978-1608075577}}
 
;For Chemical Engineering
*{{cite book | last = Luyben | first = William | authorlink = | year = 1989 | title = Process Modeling, Simulation, and Control for Chemical Engineers | publisher = Mc Graw Hill | isbn = 0-07-039159-9}}
 
==External links==
{{Wikibooks|Control Systems}}
{{Commons category|Control theory}}
* [http://www.engin.umich.edu/class/ctms/ Control Tutorials for Matlab] - A set of worked through control examples solved by several different methods.
* [http://www.controlguru.com Control Tuning and Best Practices]
* [http://www.PIDlab.com Advanced control structures, free on-line simulators explaining control theory]
 
 
{{Cybernetics}}
{{Systems}}
{{Mathematics-footer}}
 
{{DEFAULTSORT:Control Theory}}
[[Category:Control theory| ]]
[[Category:Cybernetics]]
[[Category:Formal sciences]]
