'''Convex minimization''', a subfield of [[mathematical optimization|optimization]], studies the problem of minimizing [[convex function]]s over [[convex set]]s. The convexity property can make optimization in some sense "easier" than the general case; for example, any [[local optimum|local minimum]] must be a [[global optimum|global minimum]].
 
Given a [[real number|real]] [[vector space]] <math>X</math> together with a [[convex function|convex]], real-valued [[function (mathematics)|function]]
:<math>f:\mathcal{X}\to \mathbb{R}</math>
 
defined on a [[convex set|convex subset]] <math>\mathcal{X}</math> of <math>X</math>, the problem is to find any point <math>x^\ast</math> in <math>\mathcal{X}</math> for which the number <math>f(x^\ast)</math> is smallest, i.e., a point <math>x^\ast</math> such that
 
:<math>f(x^\ast) \le f(x)</math> for all <math>x \in \mathcal{X}</math>.
 
The convexity of <math>f</math> makes the powerful tools of [[convex analysis]] applicable. In finite-dimensional [[normed space]]s, the [[Hahn–Banach theorem]] and the existence of [[subgradient]]s lead to a particularly satisfying theory of [[necessary and sufficient conditions]] for optimality, a [[dual problem|duality theory]] generalizing that for [[linear programming]], and effective computational methods.
 
Convex minimization has applications in a wide range of disciplines, such as automatic [[control systems]], estimation and [[signal processing]], communications and networks, electronic [[circuit design]], data analysis and modeling, [[statistics]] ([[optimal design]]), and [[finance]]. With recent improvements in computing and in optimization theory, convex minimization is nearly as straightforward as [[linear programming]]. Many optimization problems can be reformulated as convex minimization problems. For example, the problem of ''maximizing'' a ''concave'' function ''f'' can be equivalently reformulated as the problem of ''minimizing'' the function −''f'', which is ''convex''.
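As a minimal sketch of this reformulation (the function and solver below are illustrative choices, not part of the theory), the concave function ''f''(''x'') = log ''x'' − ''x'' can be maximized by handing its convex negation to an off-the-shelf one-dimensional minimizer:

<syntaxhighlight lang="python">
# Maximize the concave f(x) = log(x) - x on x > 0 by minimizing its
# convex negation.  Because -f is convex, any local minimizer found
# by the solver is automatically the global one.
import numpy as np
from scipy.optimize import minimize_scalar

neg_f = lambda x: -(np.log(x) - x)  # convex on x > 0

res = minimize_scalar(neg_f, bounds=(1e-9, 10.0), method="bounded")
print(res.x)  # ~1.0, since f'(x) = 1/x - 1 vanishes at x = 1
</syntaxhighlight>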
 
==Convex optimization problem==
An ''optimization problem'' (also referred to as a ''mathematical programming problem'' or ''minimization problem'') of finding some <math>x^\ast \in \mathcal{X}</math> such that
:<math>f(x^\ast) = \min \{f(x):x \in \mathcal{X}\},</math>
where <math>\mathcal{X} \subset \mathbb{R}^n</math> is the ''feasible set'' and <math>f(x):\mathbb{R}^n \rightarrow \mathbb{R}</math> is the ''objective'', is called ''convex'' if <math>\mathcal{X}</math> is a closed convex set and <math>f(x)</math> is convex on <math>\mathbb{R}^n</math>.
<ref>{{cite book |last1=Hiriart-Urruty |first1=Jean-Baptiste |last2= Lemaréchal |first2=Claude |title=Convex analysis and minimization algorithms: Fundamentals |page=291 |year=1996 |url=http://books.google.de/books?id=Gdl4Jc3RVjcC&printsec=frontcover&dq=lemarechal+convex+analysis+and+minimization&hl=de&sa=X&ei=E602T4GXGMzQsgaPtJ2VDA&ved=0CDUQ6AEwAA#v=onepage&q=convex%20minimization&f=false}}</ref>
<ref>{{cite book |last1=Ben-Tal |first1=Aharon |last2= Nemirovskiĭ |first2=Arkadiĭ Semenovich |title=Lectures on modern convex optimization: analysis, algorithms, and engineering applications |pages=335–336 |year=2001 |url=http://books.google.de/books?id=M3MqpEJ3jzQC&printsec=frontcover&dq=Lectures+on+Modern+Convex+Optimization:+Analysis,+Algorithms,&hl=de&sa=X&ei=26c2T6G7HYrIswac0d2uDA&ved=0CDIQ6AEwAA#v=onepage&q=convex%20programming&f=false}}</ref>
 
Alternatively, an optimization problem of the form
: <math>\begin{align}
&\operatorname{minimize}& & f(x) \\
&\operatorname{subject\;to}
& &g_i(x) \leq 0, \quad i = 1,\dots,m
\end{align}</math>
is called convex if the functions <math>f, g_1, \ldots, g_m : \mathbb{R}^n \rightarrow \mathbb{R}</math> are convex.<ref>Boyd/Vandenberghe, p. 7</ref>
 
==Theory==
The following statements are true about the convex minimization problem:
* if a [[local minimum]] exists, then it is a [[global minimum]].
* the set of all (global) minima is convex.
* if a ''strictly'' convex function has a minimum, then the minimum is unique.
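
For example, the strictly convex function <math>f(x) = (x-1)^2</math> has the unique minimizer <math>x = 1</math>, while the convex (but not strictly convex) function <math>f(x) = \max(|x|-1, 0)</math> attains its minimum on the whole interval <math>[-1, 1]</math>, which is a convex set.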
 
These results are used by the theory of convex minimization along with geometric notions from [[functional analysis]] (in Hilbert spaces) such as the [[Hilbert projection theorem]], the [[separating hyperplane theorem]], and [[Farkas' lemma]].
 
==Standard form==
''Standard form'' is the usual and most intuitive form of describing a convex minimization problem. It consists of the following three parts:
* A '''convex function''' <math>f(x): \mathbb{R}^n \to \mathbb{R}</math> to be minimized over the variable <math>x</math>
* '''Inequality constraints''' of the form <math>g_i(x) \leq 0</math>, where the functions <math>g_i</math> are convex
* '''Equality constraints''' of the form <math>h_i(x) = 0</math>, where  the functions <math>h_i</math> are [[affine transformation|affine]]. In practice, the terms "linear" and "affine" are often used interchangeably. Such constraints can be expressed in the form <math>h_i(x) = a_i^T x + b_i</math>, where <math>a_i</math> is a column vector and <math>b_i</math> a real number.
 
A convex minimization problem is thus written as
 
: <math>\begin{align}
&\underset{x}{\operatorname{minimize}}& & f(x) \\
&\operatorname{subject\;to}
& &g_i(x) \leq 0, \quad i = 1,\dots,m \\
&&&h_i(x) = 0, \quad i = 1, \dots,p.
\end{align}</math>
 
Note that every equality constraint <math>h(x) = 0</math> can be equivalently replaced by a pair of inequality constraints <math>h(x)\leq 0</math> and <math>-h(x)\leq 0</math>. Therefore, for theoretical purposes, equality constraints are redundant; however, it can be beneficial to treat them specially in practice.
 
Following from this fact, it is easy to understand why <math>h_i(x) = 0</math> has to be affine as opposed to merely being convex. If <math>h_i</math> is convex, the constraint <math>h_i(x) \leq 0</math> defines a convex set, but the constraint <math>-h_i(x) \leq 0</math> generally does not, because <math>-h_i</math> is ''concave''. Therefore, the only way to guarantee that the set defined by <math>h_i(x) = 0</math> is convex is for <math>h_i</math> to be affine.
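A small sketch of a problem in this standard form, written with the cvxpy modeling library (the library choice and the problem data are illustrative assumptions; any convex solver could be substituted):

<syntaxhighlight lang="python">
# Project the point (3, 2) onto {x : x >= 0, x_1 + x_2 = 1}: a convex
# objective f, convex inequality constraints g_i(x) <= 0, and an
# affine equality constraint h(x) = 0, as in the standard form above.
import cvxpy as cp
import numpy as np

x = cp.Variable(2)
f = cp.sum_squares(x - np.array([3.0, 2.0]))  # convex objective
constraints = [
    -x <= 0,             # inequality constraints g_i(x) <= 0
    cp.sum(x) - 1 == 0,  # affine equality constraint h(x) = 0
]
prob = cp.Problem(cp.Minimize(f), constraints)
prob.solve()
print(x.value)  # approximately (1, 0)
</syntaxhighlight>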
 
==Examples==
The following problems are all convex minimization problems, or can be transformed into convex minimization problems via a change of variables (a least-squares sketch follows the list):
 
*[[Least squares]]
*[[Linear programming]]
* Convex [[quadratic programming|quadratic minimization]] with linear constraints
* Convex [[Quadratically constrained quadratic programming|quadratic minimization with convex quadratic constraints]]
*[[Conic optimization]]
*[[Geometric programming]]
*[[Second order cone programming]]
*[[Semidefinite programming]]
*[[Entropy maximization]] with appropriate constraints
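
For the first of these, [[least squares]], convexity yields the familiar closed form: the objective <math>\|Ax-b\|^2</math> is convex, so its stationary point, the solution of the normal equations, is a global minimum. A minimal numerical check (the random data are illustrative):

<syntaxhighlight lang="python">
# Least squares as unconstrained convex minimization: the stationary
# point of ||Ax - b||^2, given by the normal equations A^T A x = A^T b,
# must be a global minimizer because the objective is convex.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)

x_normal = np.linalg.solve(A.T @ A, A.T @ b)    # normal equations
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]  # library solver
print(np.allclose(x_normal, x_lstsq))           # True
</syntaxhighlight>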
 
==Lagrange multipliers==
Consider a convex minimization problem given in standard form by a cost function <math>f(x)</math> and inequality constraints <math>g_i(x)\leq 0</math> for <math>i = 1, \ldots, m</math>. Then the domain <math>\mathcal{X}</math> is:
 
:<math>\mathcal{X} = \left\lbrace{x\in X \vert g_1(x)\le0, \ldots, g_m(x)\le0}\right\rbrace.</math>
 
The [[Lagrange multipliers|Lagrangian function]] for the problem is
 
:''L''(''x'',''&lambda;''<sub>0</sub>,...,''&lambda;''<sub>''m''</sub>) = ''&lambda;''<sub>0</sub>''f''(''x'') + ''&lambda;''<sub>1</sub>''g''<sub>1</sub>(''x'')  + ... + ''&lambda;''<sub>''m''</sub>''g''<sub>''m''</sub>(''x'').
 
For each point ''x'' in ''X'' that minimizes ''f'' over ''X'', there exist real numbers ''λ''<sub>0</sub>, ..., ''λ''<sub>m</sub>, called [[Lagrange multipliers]],
that satisfy these conditions simultaneously:
 
# ''x'' minimizes ''L''(''y'', λ<sub>0</sub>, λ<sub>1</sub>, ..., λ<sub>m</sub>) over all ''y'' in ''X'',
# λ<sub>0</sub> ≥ 0, λ<sub>1</sub> ≥ 0, ..., λ<sub>''m''</sub> ≥ 0, with at least one λ<sub>''k''</sub> > 0,
# ''&lambda;''<sub>1</sub>''g''<sub>1</sub>(''x'') = 0, ..., ''λ''<sub>''m''</sub>''g''<sub>''m''</sub>(''x'') = 0 (complementary slackness).
 
If there exists a "strictly feasible point", i.e., a point ''z'' satisfying
:''g''<sub>1</sub>(''z'') < 0,...,''g''<sub>''m''</sub>(''z'') < 0,
then the statement above can be upgraded to assert that λ<sub>0</sub>=1.
 
Conversely, if some ''x'' in ''X'' satisfies conditions 1–3 for [[scalar (mathematics)|scalar]]s λ<sub>0</sub>, ..., λ<sub>''m''</sub> with λ<sub>0</sub> = 1, then ''x'' is certain to minimize ''f'' over ''X''.
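For a concrete check of these conditions, take <math>f(x) = x^2</math> with the single constraint <math>g_1(x) = 1 - x \le 0</math> on <math>X = \mathbb{R}</math>. The minimizer over the feasible set is <math>x = 1</math>, and since <math>z = 2</math> is strictly feasible we may take <math>\lambda_0 = 1</math>. With <math>\lambda_1 = 2</math>, the Lagrangian <math>L(y, 1, 2) = y^2 + 2(1 - y) = (y-1)^2 + 1</math> is minimized over all <math>y</math> at <math>y = 1</math> (condition 1), the multipliers are nonnegative (condition 2), and <math>\lambda_1 g_1(1) = 2 \cdot 0 = 0</math> (complementary slackness).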
 
==Methods==
Convex minimization problems can be solved by the following contemporary methods:<ref>For methods of convex minimization, see the volumes by Hiriart-Urruty and Lemaréchal (bundle) and the textbooks by [[Andrzej Piotr Ruszczyński|Ruszczyński]], and by Boyd and Vandenberghe (interior point).</ref>
* "Bundle methods" (Wolfe, Lemaréchal, Kiwiel), and
* [[Subgradient method#Subgradient-projection & bundle methods|Subgradient projection]] methods (Polyak),
* [[Interior-point methods]] (Nemirovskii and Nesterov).
Other methods of interest: 
*[[Cutting-plane methods]]
*[[Ellipsoid method]]
*[[Subgradient method]]
*[[Drift plus penalty|Dual subgradients and the drift-plus-penalty method]]
Subgradient methods can be implemented simply and so are widely used.<ref>Bertsekas</ref>   Dual subgradient methods are subgradient methods applied to a [[Duality (optimization)|dual problem]].  The [[Drift plus penalty|drift-plus-penalty]] method is similar to the dual subgradient method, but takes a time average of the primal variables.
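As the simplicity remark suggests, a basic projected subgradient method fits in a few lines. The sketch below is an illustration only (the problem data and the classical divergent-series steps <math>\alpha_k = 1/(k+1)</math> are assumed choices); it minimizes the nonsmooth convex function <math>\|Ax - b\|_1</math> over a box:

<syntaxhighlight lang="python">
# Projected subgradient method for the nonsmooth convex problem
#   minimize ||Ax - b||_1  subject to  0 <= x <= 1,
# with divergent-series step sizes 1/(k+1).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x = np.zeros(5)
best = np.inf
for k in range(5000):
    g = A.T @ np.sign(A @ x - b)            # a subgradient of ||Ax - b||_1
    x = np.clip(x - g / (k + 1), 0.0, 1.0)  # step, then project onto the box
    best = min(best, np.linalg.norm(A @ x - b, 1))
print(best)  # best objective value seen so far
</syntaxhighlight>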
 
==Convex minimization with good complexity: Self-concordant barriers==
The efficiency of iterative methods is poor for the class of all convex problems, because this class includes "bad guys" whose minimum cannot be approximated without a large number of function and subgradient evaluations;<ref>{{harvtxt|Hiriart-Urruty|Lemaréchal|1993|loc=Example XV.1.1.2, p. 277}} discuss a "bad guy" constructed by Arkadi Nemirovskii.</ref>  thus, to have practically appealing efficiency results, it is necessary to make additional restrictions on the class of problems. Two such classes are problems with special [[barrier function]]s: first, ''self-concordant'' barrier functions, following the theory of Nesterov and Nemirovskii, and second, ''self-regular'' barrier functions, following the theory of Terlaky and coauthors.
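Interior-point methods minimize the objective plus a multiple of such a barrier. As a toy illustration only (not the self-concordance theory itself): to minimize <math>x</math> subject to <math>x \ge 1</math>, one can minimize <math>t x - \log(x-1)</math> for growing <math>t</math>; setting the derivative <math>t - 1/(x-1)</math> to zero gives the barrier minimizer <math>x = 1 + 1/t</math>, which approaches the true solution <math>x^\ast = 1</math>:

<syntaxhighlight lang="python">
# Toy log-barrier illustration: the minimizer of t*x - log(x - 1),
# found from t - 1/(x - 1) = 0, is x = 1 + 1/t and tends to x* = 1.
for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, 1.0 + 1.0 / t)
</syntaxhighlight>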
 
==Quasiconvex minimization==
Problems with convex sublevel sets can be efficiently minimized, in theory.
Yuri Nesterov proved that quasi-convex minimization problems could be solved efficiently, and his results were extended by Kiwiel.<ref>In [[computational complexity|theory]], quasiconvex programming and convex programming problems can be solved in a reasonable amount of time, where the number of iterations grows like a polynomial in the dimension of the problem (and in the reciprocal of the approximation error tolerated):<p>
{{Cite news|last=Kiwiel|first=Krzysztof C.|title=Convergence and efficiency of subgradient methods for quasiconvex minimization|journal=Mathematical Programming  (Series A)|publisher=Springer|location=Berlin, Heidelberg|issn=0025-5610|pages=1–25|volume=90|issue=1|doi=10.1007/PL00011414|year=2001|mr=1819784}} Kiwiel acknowledges that [[Yuri Nesterov (mathematician)|Yuri Nesterov]] first established that quasiconvex minimization problems can be solved efficiently.</ref> However, such theoretically "efficient" methods use "divergent-series" [[gradient descent#Stepsize_rules|stepsize rule]]s, which were first developed for classical [[subgradient method]]s. Classical subgradient methods using divergent-series rules are much slower than modern methods of convex minimization, such as subgradient projection methods, [[bundle method]]s of descent, and nonsmooth [[filter method]]s.
 
Solving even close-to-convex but non-convex problems can be computationally intractable. Minimizing a unimodal function is intractable, regardless of the smoothness of the function, according to results of Ivanov.<ref>Nemirovskii and Judin</ref>
 
==Convex maximization==
Conventionally, the definition of the convex optimization problem (we recall) requires that the objective function ''f'' to be ''minimized'' and the feasible set both be convex. In the special case of linear programming (LP), the objective function is both concave and convex, and so LP can equally well be posed as the problem of maximizing an objective function without confusion. However, for most convex minimization problems, the objective function is not concave, and therefore such problems are formulated in the standard form of convex optimization problems, that is, minimizing the convex objective function.
 
For nonlinear convex minimization, the associated maximization problem obtained by substituting the [[supremum]] operator for the [[infimum]] operator  is not a problem of convex optimization, as conventionally defined. However, it is studied in the larger field of convex optimization as a problem of convex maximization.<ref>Convex maximization is mentioned in the subsection on convex optimization in this textbook:
[http://books.google.se/books?id=9sbsMkuFzhYC&pg=PA206&dq=%22convex+maximization%22,+%22convex+minimization%22+OR+%22convex+optimization%22&hl=sv&sa=X&ei=YswrT8-kGqfV4QTKs8CwDg&ved=0CF8Q6AEwCA#v=onepage&q=%22convex%20maximization%22%2C%20%22convex%20minimization%22%20OR%20%22convex%20optimization%22&f=false  Ulrich Faigle, Walter Kern, and George Still. ''Algorithmic principles of mathematical programming''. Springer-Verlag. Texts in Mathematics. Chapter 10.2, Subsection "Convex optimization", pages 205-206.]
</ref>
 
The convex maximization problem is especially important for studying the existence of maxima.  Consider the restriction of a convex function to a [[compact set|compact]] convex set: Then, on that set, the function attains its constrained ''maximum'' only on the boundary.<ref>Theorem 32.1 in Rockafellar's ''Convex Analysis'' states this [[maximum principle]] for extended real-valued functions.</ref> Such results, called "[[maximum principle]]s", are useful in the theory of [[harmonic functions]], [[potential theory]], and [[partial differential equation]]s.
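As a toy numerical check of this maximum principle (the function and grid are illustrative), the convex function <math>f(x,y) = x^2 + y^2</math> on the square <math>[-1,1]^2</math> attains its maximum at the corner points of the boundary:

<syntaxhighlight lang="python">
# A convex function on a compact convex set attains its constrained
# maximum on the boundary.  Check on the square [-1, 1]^2 with
# f(x, y) = x^2 + y^2: no grid value exceeds the corner value.
import numpy as np

xs = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs)
f = X**2 + Y**2
print(f.max(), f[0, 0])  # both 2.0: the maximum sits at a corner
</syntaxhighlight>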
 
The problem of minimizing a [[quadratic polynomial|quadratic]] [[multivariate polynomial]] on a [[cube]] is [[NP-hard]].<ref>Sahni, S. "Computationally related problems," ''SIAM Journal on Computing'', 3, 262–279, 1974.</ref>  In fact, the [[quadratic programming|quadratic minimization]] problem is [[NP-hard]] even if the matrix of the quadratic form has only one negative [[eigenvalue]].<ref>Pardalos, Panos M., and Vavasis, Stephen A. "Quadratic programming with one negative eigenvalue is NP-hard," ''Journal of Global Optimization'', Volume 1, Number 1, 1991, pp. 15–22.</ref>
 
==Extensions==
Advanced treatments consider convex functions that can also attain the value positive infinity; the [[characteristic function (convex analysis)|indicator function]] of a convex set <math>\mathcal{X}</math> is zero for every <math>x\in\mathcal{X}</math> and positive infinity otherwise.
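Explicitly, the indicator function of <math>\mathcal{X}</math> is

:<math>I_{\mathcal{X}}(x) = \begin{cases} 0, & x \in \mathcal{X}, \\ +\infty, & x \notin \mathcal{X}, \end{cases}</math>

so that minimizing <math>f</math> over <math>\mathcal{X}</math> is the same as minimizing the extended-valued convex function <math>f + I_{\mathcal{X}}</math> over all of <math>X</math>.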
 
Extensions of convex functions include [[Biconvex optimization|biconvex]], [[pseudo-convex function|pseudo-convex]], and [[quasi-convex function]]s. Partial extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of [[Convexity_(mathematics)#Generalizations_and_extensions_for_convexity|generalized convexity]] ("abstract convex analysis").
 
==See also==
* [[Duality (optimization)|Duality]]
* [[Karush–Kuhn–Tucker conditions]]
* [[Optimization problem]]
* [[Proximal Gradient Methods]]
 
==Notes==
<references/>
 
==References==
* {{cite book
  | last = Bertsekas
  | first = Dimitri
|authorlink= Dimitri Bertsekas
  | title = Convex Analysis and Optimization
  | publisher = Athena Scientific
  | year = 2003
}}
 
* {{cite book|title=Convex Optimization|first1=Stephen P.|last1=Boyd|first2=Lieven|last2=Vandenberghe|year=2004|publisher=Cambridge University Press|isbn=978-0-521-83378-3|url=http://www.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf|format=pdf|accessdate=October 15, 2011}}
 
* [[Jonathan M. Borwein|Borwein, Jonathan]], and Lewis, Adrian. (2000). ''Convex Analysis and Nonlinear Optimization''. Springer.
 
* Hiriart-Urruty, Jean-Baptiste, and [[Claude Lemaréchal|Lemaréchal, Claude]]. (2004). ''Fundamentals of Convex analysis''. Berlin: Springer.
 
* {{cite book|last1=Hiriart-Urruty|first1=Jean-Baptiste|last2=Lemaréchal|first2=Claude|title=Convex analysis and minimization algorithms, Volume&nbsp;I: Fundamentals|series=Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]|volume=305|publisher=Springer-Verlag|location=Berlin|year=1993|pages=xviii+417|isbn=3-540-56850-6|mr=1261420|authorlink2=Claude Lemaréchal}}
*{{cite book|last1=Hiriart-Urruty|first1=Jean-Baptiste|last2=Lemaréchal|first2=Claude|authorlink2=Claude Lemaréchal|title=Convex analysis and minimization algorithms, Volume&nbsp;II: Advanced theory and bundle methods|series=Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]|volume=306|publisher=Springer-Verlag|location=Berlin|year=1993|pages=xviii+346|isbn=3-540-56852-2|ref=harv|mr=1295240}}
 
* {{cite book|first=Krzysztof C.|last=Kiwiel|title=Methods of Descent for Nondifferentiable Optimization|year=1985|publisher=Springer-Verlag|location= New York|series=Lecture Notes in Mathematics|isbn=978-3-540-15642-0 |ref=harv}}
 
* {{cite book|last=Lemaréchal|first=Claude|chapter=Lagrangian relaxation|pages=112–156|title=Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May&nbsp;15–19,&nbsp;2000|editor=Michael Jünger and Denis Naddef|series=Lecture Notes in Computer Science|volume=2241|publisher=Springer-Verlag|location=Berlin|year=2001|isbn=3-540-42877-1|mr=1900016|doi=10.1007/3-540-45586-8_4|authorlink=Claude Lemaréchal}}
 
* Nesterov, Y. and Nemirovskii, A. (1994). ''Interior Point Polynomial Algorithms in Convex Programming''. SIAM.
 
* Nesterov, Yurii. (2004). ''Introductory Lectures on Convex Optimization'',  Kluwer Academic Publishers
 
* {{cite book
  | last = Rockafellar
  | first = R. T.
  | authorlink = R. Tyrrell Rockafellar
  | title = Convex analysis
  | publisher = Princeton University Press
  | year = 1970
  | location = Princeton
}}
 
* {{cite book
  | last = Ruszczyński
  | first = Andrzej
  | authorlink = Andrzej Piotr Ruszczyński
  | title = Nonlinear Optimization
  | publisher = Princeton University Press
  | year = 2006
}}
 
==External links==
* Stephen Boyd and Lieven Vandenberghe, [http://www.stanford.edu/~boyd/cvxbook/ ''Convex optimization'']  (book in pdf)
* [http://www.stanford.edu/class/ee364a/ EE364a: Convex Optimization I] and [http://www.stanford.edu/class/ee364b/ EE364b: Convex Optimization II], Stanford course homepages
*[http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-253-convex-analysis-and-optimization-spring-2010/ 6.253: Convex Analysis and Optimization], an MIT OCW course homepage
* Brian Borchers, [http://infohost.nmt.edu/~borchers/presentation.pdf An overview of software for convex optimization]
 
{{Optimization algorithms|convex}}
 
[[Category:Mathematical optimization]]
[[Category:Convex analysis]]
[[Category:Convex optimization| ]]
