# Fundamental theorem of linear programming


In applied mathematics, the fundamental theorem of linear programming, in a weak formulation, states that the maxima and minima of a linear function over a convex polygonal region occur at the region's corners. Further, if an extreme value occurs at two corners, then it must also occur everywhere on the line segment between them.
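As an illustrative sketch (assuming SciPy is available; any LP solver would do), the following solves a small linear program over a polygon in the plane and finds the optimum at one of the polygon's corners, as the theorem predicts:

```python
# Minimize c^T x over a polygon in R^2 and observe that the
# optimum is attained at a vertex (corner) of the feasible region.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])            # minimize -x - 2y (i.e. maximize x + 2y)
A = np.array([[1.0, 1.0],             # x + y <= 4
              [1.0, 0.0],             # x <= 3
              [0.0, 1.0]])            # y <= 3
b = np.array([4.0, 3.0, 3.0])

res = linprog(c, A_ub=A, b_ub=b,
              bounds=[(0, None), (0, None)],  # x >= 0, y >= 0
              method="highs")

print(res.x)    # optimal point -> the vertex (1, 3)
print(res.fun)  # optimal value -> -7
```

The feasible region here is a pentagon with corners (0,0), (3,0), (3,1), (1,3), (0,3); evaluating the objective at each corner confirms that the solver's answer, (1, 3), is the minimizing corner.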

## Statement

Consider the optimization problem

${\displaystyle \min c^{T}x{\text{ subject to }}x\in P}$

where ${\displaystyle P=\{x\in {\mathbb {R} }^{n}:Ax\leq b\}}$. If ${\displaystyle P}$ is a bounded polyhedron (and thus a polytope) and ${\displaystyle x^{\ast }}$ is an optimal solution to the problem, then ${\displaystyle x^{\ast }}$ is either an extreme point (vertex) of ${\displaystyle P}$, or lies on a face ${\displaystyle F\subset P}$ of optimal solutions.
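The statement can be checked directly in two dimensions: every vertex of ${\displaystyle P=\{x:Ax\leq b\}}$ is the intersection of two active constraints, so one can enumerate candidate vertices by solving each ${\displaystyle 2\times 2}$ subsystem of ${\displaystyle Ax=b}$, keep the feasible ones, and compare objective values. A minimal sketch (the particular ${\displaystyle A}$, ${\displaystyle b}$, ${\displaystyle c}$ are made up for illustration):

```python
# Enumerate the vertices of a 2-D polytope P = {x : Ax <= b} and
# verify that the minimum of c^T x over P is attained at a vertex.
from itertools import combinations
import numpy as np

A = np.array([[1.0, 1.0],    # x + y <= 4
              [-1.0, 0.0],   # x >= 0
              [0.0, -1.0]])  # y >= 0
b = np.array([4.0, 0.0, 0.0])
c = np.array([1.0, -3.0])    # objective: minimize x - 3y

vertices = []
for i, j in combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                        # constraints parallel: no intersection
    x = np.linalg.solve(M, b[[i, j]])   # point where both constraints are tight
    if np.all(A @ x <= b + 1e-9):       # keep only feasible intersections
        vertices.append(x)

values = [c @ v for v in vertices]
best = vertices[int(np.argmin(values))]
print(best, min(values))  # vertex (0, 4) attains the minimum, -12
```

Here the feasible triangle has vertices (0,0), (4,0), (0,4), with objective values 0, 4, and −12 respectively, so the minimum over the whole triangle is attained at the corner (0,4).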

## Proof

Suppose, for the sake of contradiction, that ${\displaystyle x^{\ast }\in \mathrm {int} (P)}$, and assume ${\displaystyle c\neq 0}$ (if ${\displaystyle c=0}$, every point of ${\displaystyle P}$ is optimal and the claim is trivial). Then there exists some ${\displaystyle \epsilon >0}$ such that the ball of radius ${\displaystyle \epsilon }$ centered at ${\displaystyle x^{\ast }}$ is contained in ${\displaystyle P}$, that is, ${\displaystyle B_{\epsilon }(x^{\ast })\subset P}$. Therefore,

${\displaystyle x^{\ast }-{\frac {\epsilon }{2}}{\frac {c}{||c||}}\in P}$ and

${\displaystyle c^{T}\left(x^{\ast }-{\frac {\epsilon }{2}}{\frac {c}{||c||}}\right)=c^{T}x^{\ast }-{\frac {\epsilon }{2}}{\frac {c^{T}c}{||c||}}=c^{T}x^{\ast }-{\frac {\epsilon }{2}}||c||<c^{T}x^{\ast }.}$

Hence ${\displaystyle x^{\ast }}$ is not an optimal solution, a contradiction. Therefore, ${\displaystyle x^{\ast }}$ must lie on the boundary of ${\displaystyle P}$. If ${\displaystyle x^{\ast }}$ is not a vertex itself, it must be a convex combination of vertices of ${\displaystyle P}$ (since a polytope is the convex hull of its vertices), say ${\displaystyle x_{1},...,x_{t}}$. Then ${\displaystyle x^{\ast }=\sum _{i=1}^{t}\lambda _{i}x_{i}}$ with ${\displaystyle \lambda _{i}\geq 0}$ and ${\displaystyle \sum _{i=1}^{t}\lambda _{i}=1}$. Observe that

${\displaystyle 0=c^{T}\left(\left(\sum _{i=1}^{t}\lambda _{i}x_{i}\right)-x^{\ast }\right)=c^{T}\left(\sum _{i=1}^{t}\lambda _{i}(x_{i}-x^{\ast })\right)=\sum _{i=1}^{t}\lambda _{i}(c^{T}x_{i}-c^{T}x^{\ast }).}$

Since ${\displaystyle x^{\ast }}$ is an optimal solution of the minimization problem, ${\displaystyle c^{T}x_{i}\geq c^{T}x^{\ast }}$ for each ${\displaystyle i}$, so all terms in the sum are nonnegative. Since the sum equals zero, each individual term must equal zero. Hence, ${\displaystyle c^{T}x^{\ast }=c^{T}x_{i}}$ for each ${\displaystyle x_{i}}$, so every ${\displaystyle x_{i}}$ is also optimal, and therefore all points on the face whose vertices are ${\displaystyle x_{1},...,x_{t}}$ are optimal solutions.
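The final step relies only on the linearity of the objective: if two vertices attain the same value ${\displaystyle c^{T}x_{1}=c^{T}x_{2}}$, then so does every convex combination of them. A quick numerical check of this identity (the vertices below are an example, taken from the edge ${\displaystyle x+y=4}$ with ${\displaystyle c=(1,1)}$):

```python
# Linearity check: if c^T x1 == c^T x2, then every convex combination
# lam*x1 + (1-lam)*x2 has the same objective value, so the whole edge
# between the two vertices is optimal.
import numpy as np

c = np.array([1.0, 1.0])
x1 = np.array([4.0, 0.0])   # two vertices with equal objective value c^T x = 4
x2 = np.array([0.0, 4.0])

for lam in np.linspace(0.0, 1.0, 5):
    x = lam * x1 + (1 - lam) * x2        # convex combination of the vertices
    assert abs(c @ x - 4.0) < 1e-12      # objective is constant along the edge
```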