In [[Optimization (mathematics)|optimization]],  '''quasi-Newton methods''' (a special case of '''variable metric methods''') are algorithms for finding local [[maxima and minima]] of [[function (mathematics)|functions]]. Quasi-Newton methods are based on
[[Newton's method in optimization|Newton's method]] to find the [[stationary point]] of a function, where the [[gradient]] is 0. Newton's method assumes that the function can be locally approximated as a [[quadratic function|quadratic]] in the region around the optimum, and uses the first and second derivatives to find the stationary point. In higher dimensions, Newton's method uses the gradient and the [[Hessian matrix]] of second [[derivative]]s of the function to be minimized.
In quasi-Newton methods the Hessian matrix does not need to be computed. The Hessian is updated by analyzing successive gradient vectors instead. Quasi-Newton methods are a generalization of the [[secant method]] to find the root of the first derivative for multidimensional problems. In multiple dimensions the secant equation is under-determined, and quasi-Newton methods differ in how they constrain the solution, typically by adding a simple low-rank update to the current estimate of the Hessian.
 
The first quasi-Newton algorithm was proposed by [[William C. Davidon]], a physicist working at [[Argonne National Laboratory]], who developed it in 1959: the [[DFP updating formula]], which was later popularized by Fletcher and Powell in 1963 but is rarely used today. The most common quasi-Newton algorithms are currently the [[SR1 formula]] (for "symmetric rank one"), the [[BHHH]] method, the widespread [[BFGS method]] (suggested independently by Broyden, Fletcher, Goldfarb, and Shanno in 1970), and its low-memory extension, [[L-BFGS]]. Broyden's class is a linear combination of the DFP and BFGS methods.
 
The SR1 formula does not guarantee that the update matrix maintains [[Positive-definite matrix|positive-definiteness]], so it can be used for indefinite problems.
[[Broyden's method]] does not require the update matrix to be symmetric; it is used to find the root of a general system of equations (rather than of the gradient)
by updating the [[Jacobian matrix and determinant|Jacobian]] (rather than the Hessian).
 
One of the chief advantages of quasi-Newton methods over [[Newton's method in optimization|Newton's method]] is that the [[Hessian matrix]] (or, in the case of quasi-Newton methods, its approximation) <math>B</math> does not need to be inverted. Newton's method, and its derivatives such as [[interior point method]]s, require the Hessian to be inverted, which is typically implemented by solving a [[system of linear equations]] and is often quite costly. In contrast, quasi-Newton methods usually generate an estimate of <math>B^{-1}</math> directly.
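To make the contrast concrete, the following Python fragment compares the two steps on a toy quadratic (an illustrative sketch only): the Newton step solves a linear system with the exact Hessian at every iteration, while the quasi-Newton step only multiplies the gradient by the stored approximation of the inverse Hessian.

<syntaxhighlight lang="python">
import numpy as np

# Toy quadratic f(x) = 1/2 x^T A x - b^T x, so the Hessian is A and the gradient is A x - b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = np.array([1.0, 1.0])

# Newton step: solve a linear system with the exact Hessian at every iteration
dx_newton = -np.linalg.solve(A, A @ x - b)

# Quasi-Newton step: multiply by the maintained approximation H of the inverse Hessian
H = np.eye(2)                      # e.g. the initial approximation H_0 = I
dx_quasi = -H @ (A @ x - b)
</syntaxhighlight>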
 
==Description of the method==
As in [[Newton's method in optimization|Newton's method]], one uses a second-order approximation to find the minimum of a function <math>f(x)</math>.
The [[Taylor series]] of <math>f(x)</math> around an iterate <math>x_k</math> is:
::<math>f(x_k+\Delta x) \approx f(x_k)+\nabla f(x_k)^T \Delta x+\frac{1}{2} \Delta x^T {B} \, \Delta x, </math>
where <math>\nabla f</math> is the [[gradient]] and <math>B</math> is an approximation to the [[Hessian matrix]].
The gradient of this approximation (with respect to <math> \Delta x </math>) is
 
::<math> \nabla f(x_k+\Delta x) \approx \nabla f(x_k)+B \, \Delta x</math>
 
and setting this gradient to zero provides the Newton step:
 
::<math>\Delta x=-B^{-1}\nabla f(x_k). \, </math>
 
The Hessian approximation <math> B </math> is chosen to satisfy
 
::<math>\nabla f(x_k+\Delta x)=\nabla f(x_k)+B \, \Delta x,</math>
 
which is called the ''secant equation'' (the Taylor series of the gradient itself). In more than one dimension <math>B</math> is [[Underdetermined system|underdetermined]]. In one dimension, solving for <math>B</math> and applying the Newton step with the updated value is equivalent to the [[secant method]].
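Explicitly, in one dimension the secant equation determines <math>B</math> uniquely from two successive gradients, and substituting it into the Newton step yields the secant iteration applied to <math>f'</math>:
::<math>B=\frac{f'(x_k)-f'(x_{k-1})}{x_k-x_{k-1}}, \qquad x_{k+1}=x_k-\frac{f'(x_k)\,(x_k-x_{k-1})}{f'(x_k)-f'(x_{k-1})}.</math>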
The various quasi-Newton methods differ in their choice of the solution to the secant equation (in one dimension, all the variants are equivalent).
Most methods (but with exceptions, such as [[Broyden's method]]) seek a symmetric solution (<math>B^T=B</math>); furthermore,
the variants listed below can be motivated by finding an update <math>B_{k+1}</math> that is as close as possible
to <math> B_{k}</math> in some [[Norm (mathematics)|norm]]; that is, <math> B_{k+1} = \textrm{argmin}_B \|B-B_k\|_V </math> where <math>V </math> is some [[positive definite matrix]] that defines the norm.
An approximate initial value <math>B_0=\beta I</math>, a positive multiple of the identity matrix, is often sufficient to achieve rapid convergence. The unknown <math>x_k</math> is updated by applying the Newton step calculated using the current approximate Hessian matrix <math>B_{k}</math> (a minimal sketch of this loop in Python follows the list below):
* <math>\Delta x_k=- \alpha_k B_k^{-1}\nabla f(x_k)</math>, with <math>\alpha_k</math> chosen to satisfy the [[Wolfe conditions]];
* <math>x_{k+1}=x_{k}+\Delta x_k</math>;
* The gradient is computed at the new point, <math>\nabla f(x_{k+1})</math>, and
:<math>y_k=\nabla f(x_{k+1})-\nabla f(x_k),</math>
is used  to update the approximate Hessian <math>\displaystyle B_{k+1}</math>, or directly its inverse <math>\displaystyle H_{k+1}=B_{k+1}^{-1}</math> using the [[Sherman-Morrison formula]].  
* A key property of the BFGS and DFP updates is that if <math> B_k </math> is positive definite and <math> \alpha_k </math> is chosen to satisfy the Wolfe conditions then <math>\displaystyle  B_{k+1} </math> is also positive definite.
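The following Python fragment is a minimal sketch of this loop, assuming only that the caller supplies the gradient <code>grad_f</code>; it maintains the inverse approximation <math>H_k = B_k^{-1}</math> directly, and a fixed step size <code>alpha</code> stands in for a proper Wolfe line search.

<syntaxhighlight lang="python">
import numpy as np

def quasi_newton_bfgs(grad_f, x0, tol=1e-8, max_iter=100, alpha=1.0):
    """Minimal BFGS-type quasi-Newton loop that maintains the inverse
    Hessian approximation H_k directly.  A fixed step size stands in for
    a proper Wolfe line search."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                         # H_0 = I (i.e. B_0 is the identity)
    g = grad_f(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        dx = -alpha * H @ g               # Delta x_k = -alpha_k * B_k^{-1} grad f(x_k)
        x_new = x + dx
        g_new = grad_f(x_new)
        y = g_new - g                     # y_k = grad f(x_{k+1}) - grad f(x_k)
        sy = y @ dx                       # curvature y_k^T Delta x_k
        if sy > 1e-12:                    # only update when curvature is positive,
            rho = 1.0 / sy                # which keeps H_k positive definite
            I = np.eye(n)
            H = ((I - rho * np.outer(dx, y)) @ H @ (I - rho * np.outer(y, dx))
                 + rho * np.outer(dx, dx))   # BFGS update of the inverse (see table below)
        x, g = x_new, g_new
    return x

# Example: minimize the quadratic f(x) = x^T A x / 2 - b^T x (gradient A x - b)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = quasi_newton_bfgs(lambda x: A @ x - b, x0=np.zeros(2))
</syntaxhighlight>

No second derivatives are ever evaluated; all curvature information enters through the gradient differences <code>y</code>.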
 
The most popular update formulas are:
 
{| class="wikitable"
 
|-
! Method
! <math>\displaystyle B_{k+1}=</math>
! <math>H_{k+1}=B_{k+1}^{-1}=</math>
|-
 
| [[DFP updating formula|DFP]]
 
| <math>\left (I-\frac {y_k \, \Delta x_k^T} {y_k^T \, \Delta x_k} \right ) B_k \left (I-\frac {\Delta x_k y_k^T} {y_k^T \, \Delta x_k} \right )+\frac{y_k y_k^T} {y_k^T \, \Delta x_k}</math>
 
| <math> H_k + \frac {\Delta x_k \Delta x_k^T}{y_k^{T} \, \Delta x_k} - \frac {H_k y_k y_k^T H_k^T} {y_k^T H_k y_k}</math>
 
|-
 
| [[BFGS method|BFGS]]
 
| <math> B_k + \frac {y_k y_k^T}{y_k^{T} \Delta x_k} - \frac {B_k \Delta x_k (B_k \Delta x_k)^T} {\Delta x_k^{T} B_k \, \Delta x_k}</math>
 
| <math> \left (I-\frac {y_k \Delta x_k^T} {y_k^T \Delta x_k} \right )^T H_k \left (I-\frac { y_k \Delta x_k^T} {y_k^T \Delta x_k} \right )+\frac
{\Delta x_k \Delta x_k^T} {y_k^T \, \Delta x_k}</math>
 
|-
| [[Broyden's method|Broyden]]
 
| <math> B_k+\frac {y_k-B_k \Delta x_k}{\Delta x_k^T \, \Delta x_k} \, \Delta x_k^T  </math>
 
|<math>H_{k}+\frac {(\Delta x_k-H_k y_k) \Delta x_k^T H_k}{\Delta x_k^T H_k \, y_k}</math>
 
|-
 
| Broyden family
 
| <math>(1-\varphi_k) B_{k+1}^{BFGS}+ \varphi_k B_{k+1}^{DFP}, \qquad \varphi_k\in[0,1]</math>
 
|
|-
 
| [[SR1 formula|SR1]]
 
| <math>B_{k}+\frac {(y_k-B_k \, \Delta x_k) (y_k-B_k \, \Delta x_k)^T}{(y_k-B_k \, \Delta x_k)^T \, \Delta x_k}</math>
 
| <math>H_{k}+\frac {(\Delta x_k-H_k y_k) (\Delta x_k-H_k y_k)^T}{(\Delta x_k-H_k y_k)^T y_k}</math>
 
|}
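Each row of the table is a rank-one or rank-two correction of the previous approximation. As an illustration, the SR1 and DFP updates of <math>B_k</math> can be written in Python as follows (a sketch only, with <math>\Delta x_k</math> written as <code>s</code> and <math>y_k</math> as <code>y</code>; the safeguard in the SR1 update is a commonly used heuristic rather than part of the formula itself).

<syntaxhighlight lang="python">
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """SR1 (symmetric rank-one) update of the Hessian approximation B_k."""
    r = y - B @ s                        # y_k - B_k Delta x_k
    denom = r @ s                        # (y_k - B_k Delta x_k)^T Delta x_k
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B                         # skip the update when the denominator is tiny
    return B + np.outer(r, r) / denom

def dfp_update(B, s, y):
    """DFP (rank-two) update of the Hessian approximation B_k."""
    rho = 1.0 / (y @ s)
    I = np.eye(B.shape[0])
    return (I - rho * np.outer(y, s)) @ B @ (I - rho * np.outer(s, y)) + rho * np.outer(y, y)
</syntaxhighlight>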
 
==Implementations==
Owing to their success, there are implementations of quasi-Newton methods in almost all programming languages. The [[NAG Numerical Library|NAG Library]] contains several routines<ref>{{ cite web | last = The Numerical Algorithms Group | first = | title = Keyword Index: Quasi-Newton | date = | work = NAG Library Manual, Mark 23 | url = http://www.nag.co.uk/numeric/fl/nagdoc_fl23/html/INDEXES/KWIC/quasi-newton.html | accessdate = 2012-02-09 }}</ref> for minimizing or maximizing a function<ref>{{ cite web | last = The Numerical Algorithms Group | first = | title = E04 – Minimizing or Maximizing a Function | date = | work = NAG Library Manual, Mark 23 | url = http://www.nag.co.uk/numeric/fl/nagdoc_fl23/pdf/E04/e04intro.pdf  | accessdate = 2012-02-09 }}</ref> which use quasi-Newton algorithms.
 
In MATLAB's [[Optimization Toolbox]], the <code>[http://www.mathworks.com/help/toolbox/optim/ug/fminunc.html fminunc]</code> function uses (among other methods) the [[BFGS]] quasi-Newton method. Many of the [http://www.mathworks.com/help/toolbox/optim/ug/brnoxzl.html constrained methods] of the Optimization Toolbox use [[BFGS]] and the variant [[L-BFGS]]. Many user-contributed quasi-Newton routines are available on MATLAB's [http://www.mathworks.com/matlabcentral/fileexchange/?term=BFGS file exchange].
 
[[Mathematica]] includes [http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationQuasiNewtonMethods.html quasi-Newton solvers]. [[R (programming language)|R]]'s <code>optim</code> general-purpose optimizer routine uses the [[BFGS]] method when called with <code>method="BFGS"</code><cite>[http://finzi.psych.upenn.edu/R/library/stats/html/optim.html]</cite>. In the [[SciPy]] extension to [[Python (programming language)|Python]], the [http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html <code>scipy.optimize.minimize</code>] function includes, among other methods, a [[BFGS]] implementation.
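For example, a minimal SciPy call looks like the following (the Rosenbrock test function and its gradient are used here purely for illustration):

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Rosenbrock test function; supplying the gradient (jac) lets BFGS build its
# Hessian approximation from exact gradient differences
def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad_f(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                      200 * (x[1] - x[0]**2)])

result = minimize(f, x0=np.array([-1.2, 1.0]), method="BFGS", jac=grad_f)
print(result.x)        # approximately [1., 1.]
</syntaxhighlight>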
 
==See also==
* [[Newton's method in optimization]]
* [[Newton's method]]
* [[DFP updating formula]]
* [[BFGS method]]
:*[[Limited-memory BFGS|L-BFGS]]
:*[[Orthant-wise limited-memory quasi-Newton|OWL-QN]]
* [[SR1 formula]]
* [[Broyden's method]]
 
== References ==
 
{{reflist}}
 
== Further reading ==
* Bonnans, J. F., Gilbert, J.Ch., [[Claude Lemaréchal|Lemaréchal, C.]] and Sagastizábal, C.A. (2006), ''Numerical optimization, theoretical and numerical aspects.'' Second edition. Springer. ISBN 978-3-540-35445-1.
* William C. Davidon, [http://link.aip.org/link/?SJE/1/1/1 Variable Metric Method for Minimization], ''SIAM Journal on Optimization'' 1 (1): 1–17, 1991.
* {{Citation | last1=Fletcher | first1=Roger | title=Practical methods of optimization | publisher=[[John Wiley & Sons]] | location=New York | edition=2nd | isbn=978-0-471-91547-8 | year=1987}}.
* Nocedal, Jorge & Wright, Stephen J. (1999). Numerical Optimization. Springer-Verlag. ISBN 0-387-98793-2.
*{{Cite book | last1=Press | first1=WH | last2=Teukolsky | first2=SA | last3=Vetterling | first3=WT | last4=Flannery | first4=BP | year=2007 | title=Numerical Recipes: The Art of Scientific Computing | edition=3rd | publisher=Cambridge University Press |  publication-place=New York | isbn=978-0-521-88068-8 | chapter=Section 10.9. Quasi-Newton or Variable Metric Methods in Multidimensions | chapter-url=http://apps.nrbook.com/empanel/index.html#pg=521}}
<!-- * Edwin K.P.Chong and Stanislaw H.Zak, An Introduction to Optimization 2ed, John Wiley & Sons Pte. Ltd. August 2001. -->
 
{{Optimization algorithms|unconstrained}}
 
[[Category:Optimization algorithms and methods]]
