'''Linear Programming Boosting''' ('''LPBoost''') is a [[supervised classification|supervised classifier]] from the [[Boosting (machine learning)|boosting]] family of classifiers.  LPBoost maximizes a ''margin'' between training samples of different classes and hence also belongs to the class of margin-maximizing supervised classification algorithms.  Consider a classification function
:<math>
f: \mathcal{X} \to \{ -1, 1 \},
</math>
which classifies samples from a space <math>\mathcal{X}</math> into one of two classes, labelled 1 and -1, respectively.  LPBoost is an algorithm for ''learning'' such a classification function from a set of training examples with known class labels. LPBoost is a [[machine learning]] technique and is especially suited to applications of joint classification and feature selection in structured domains.
 
== LPBoost overview ==
 
As in all boosting classifiers, the final classification function is of the form
:<math>f(\boldsymbol{x}) = \sum_{j=1}^{J} \alpha_j h_j(\boldsymbol{x}),</math>
where <math>\alpha_j</math> are non-negative weightings for ''weak'' classifiers <math>h_j: \mathcal{X} \to \{-1,1\}</math>.  Each individual weak classifier <math>h_j</math> may perform only slightly better than random guessing, but the resulting linear combination of many weak classifiers can perform very well.
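The following is a minimal sketch of evaluating such an ensemble, assuming the weak classifiers are given as Python callables returning -1 or 1 and the weights as a list of non-negative numbers; the final decision thresholds the weighted vote at zero, as in the final step of the algorithm below.

<syntaxhighlight lang="python">
def ensemble_classify(x, weak_classifiers, alpha):
    """Evaluate f(x) = sum_j alpha_j * h_j(x) and threshold the weighted vote at zero.

    weak_classifiers: list of callables, each mapping a sample x to -1 or +1
    alpha:            non-negative weights, one per weak classifier
    """
    score = sum(a * h(x) for a, h in zip(alpha, weak_classifiers))
    return 1 if score >= 0 else -1
</syntaxhighlight>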
 
LPBoost constructs <math>f</math> by starting with an empty set of weak classifiers.  In each iteration, a single weak classifier is selected and added to the set of considered weak classifiers, and then all the weights <math>\boldsymbol{\alpha}</math> for the current set of weak classifiers are adjusted.  This is repeated until no weak classifiers remain to be added.
 
The property that all classifier weights are adjusted in each iteration is known as the ''totally corrective'' property.  Early boosting methods, such as [[AdaBoost]], do not have this property and converge more slowly.
 
== Linear program ==
 
More generally, let <math>\mathcal{H}=\{h(\cdot;\omega) | \omega \in \Omega\}</math> be the possibly infinite set of weak classifiers, also termed ''hypotheses''.  One way to write down the problem LPBoost solves is as a [[linear program]] with infinitely many variables.
 
The primal linear program of LPBoost, optimizing over the non-negative weight vector <math>\boldsymbol{\alpha}</math>, the non-negative vector <math>\boldsymbol{\xi}</math> of slack variables and the ''margin'' <math>\rho</math> is the following.
 
:<math>\begin{array}{cl}
  \underset{\boldsymbol{\alpha},\boldsymbol{\xi},\rho}{\min} & -\rho + D \sum_{n=1}^{\ell} \xi_n\\
  \textrm{s.t.} & \sum_{\omega \in \Omega} y_n \alpha_{\omega} h(\boldsymbol{x}_n ; \omega) + \xi_n \geq \rho,\qquad n=1,\dots,\ell,\\
  & \sum_{\omega \in \Omega} \alpha_{\omega} = 1,\\
  & \xi_n \geq 0,\qquad n=1,\dots,\ell,\\
  & \alpha_{\omega} \geq 0,\qquad \omega \in \Omega,\\
  & \rho \in {\mathbb R}.
\end{array}</math>
 
Note the effect of the slack variables <math>\boldsymbol{\xi} \geq 0</math>: their one-norm is penalized in the objective function by a constant factor <math>D</math>.  Because the slack variables can absorb any violation of the margin constraints, the linear program is always primal feasible; the constant <math>D</math> controls the trade-off between a large margin and few margin violations.
 
Here we adopt the notation of a parameter space <math>\Omega</math>, such that for a choice <math>\omega \in \Omega</math> the weak classifier <math>h(\cdot ; \omega): \mathcal{X} \to \{-1,1\}</math> is uniquely defined.
 
When the above linear program was first written down in early publications about boosting methods, it was disregarded as intractable due to the large number of variables <math>\boldsymbol{\alpha}</math>.  Only later was it discovered that such linear programs can indeed be solved efficiently using the classic technique of [[Delayed Column Generation|column generation]].
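For a finite set of hypotheses, the restricted primal can be handed directly to an off-the-shelf LP solver.  The following sketch uses <code>scipy.optimize.linprog</code> and assumes the hypothesis outputs have been collected in a matrix <code>H</code> with <code>H[n, j] = h_j(x_n)</code>; the function name and matrix layout are illustrative conventions, not part of the original formulation.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

def solve_restricted_primal(H, y, D):
    """Solve the LPBoost primal restricted to a finite set of J hypotheses.

    H : (l, J) matrix with H[n, j] = h_j(x_n) in {-1, +1}
    y : (l,) array of class labels in {-1, +1}
    D : penalization constant for the slack variables
    Returns (alpha, xi, rho).
    """
    l, J = H.shape
    # Variable vector z = [alpha (J entries), xi (l entries), rho (1 entry)].
    c = np.concatenate([np.zeros(J), D * np.ones(l), [-1.0]])   # minimize -rho + D * sum(xi)
    # Margin constraints, rewritten as -(y_n * H[n, :]) @ alpha - xi_n + rho <= 0.
    A_ub = np.hstack([-(y[:, None] * H), -np.eye(l), np.ones((l, 1))])
    b_ub = np.zeros(l)
    # Simplex constraint on the hypothesis weights: sum_j alpha_j = 1.
    A_eq = np.concatenate([np.ones(J), np.zeros(l), [0.0]])[None, :]
    b_eq = [1.0]
    bounds = [(0, None)] * (J + l) + [(None, None)]             # alpha, xi >= 0; rho free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    z = res.x
    return z[:J], z[J:J + l], z[-1]
</syntaxhighlight>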
 
=== Column Generation for LPBoost ===
In a [[linear program]] a ''column'' corresponds to a primal variable.  [[Delayed Column Generation|Column generation]] is a technique for solving large linear programs.  It typically works on a restricted problem, dealing only with a subset of the variables.  By generating primal variables iteratively and on demand, the original unrestricted problem with all its variables is eventually recovered.  By choosing the columns to generate cleverly, the problem can be solved so that only a small fraction of the columns has to be created, while the obtained solution is still guaranteed to be optimal for the original full problem.
 
==== LPBoost dual problem ====
 
Columns in the primal linear program correspond to rows in the [[dual problem|dual linear program]].  The equivalent dual linear program of LPBoost is the following.
 
:<math>\begin{array}{cl}
\underset{\boldsymbol{\lambda},\gamma}{\max} & \gamma\\
\textrm{s.t.} & \sum_{n=1}^{\ell} y_n h(\boldsymbol{x}_n ; \omega) \lambda_n + \gamma \leq 0,\qquad \omega \in \Omega,\\
& 0 \leq \lambda_n \leq D,\qquad n=1,\dots,\ell,\\
& \sum_{n=1}^{\ell} \lambda_n = 1,\\
& \gamma \in \mathbb{R}.
\end{array}</math>
 
For [[linear program]]s, the optimal values of the primal and [[dual problem]] are equal.  For the above primal and dual problems, the optimal value is the negative 'soft margin'.  The soft margin is the size of the margin separating positive from negative training instances, minus the positive slack variables that carry penalties for margin-violating samples.  Thus, the soft margin may be positive even though not all samples are linearly separated by the classification function.  The margin without the slack correction is called the 'hard margin' or 'realized margin'.
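The restricted dual over a finite set of hypotheses is again a small linear program.  Below is a sketch using <code>scipy.optimize.linprog</code>, under the same illustrative <code>H</code>-matrix convention as above.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

def solve_restricted_dual(H, y, D):
    """Solve the LPBoost dual restricted to a finite set of J hypotheses.

    H : (l, J) matrix with H[n, j] = h_j(x_n) in {-1, +1}
    y : (l,) array of class labels in {-1, +1}
    D : penalization constant (upper bound on each lambda_n)
    Returns (lam, gamma): the sample weights and the dual objective value.
    """
    l, J = H.shape
    # Variable vector z = [lambda (l entries), gamma (1 entry)];
    # maximizing gamma is expressed as minimizing -gamma.
    c = np.concatenate([np.zeros(l), [-1.0]])
    # One constraint per hypothesis: sum_n y_n h_j(x_n) lambda_n + gamma <= 0.
    A_ub = np.hstack([(y[:, None] * H).T, np.ones((J, 1))])
    b_ub = np.zeros(J)
    # lambda must be a distribution over the training samples: sum_n lambda_n = 1.
    A_eq = np.concatenate([np.ones(l), [0.0]])[None, :]
    b_eq = [1.0]
    bounds = [(0, D)] * l + [(None, None)]      # 0 <= lambda_n <= D; gamma free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:l], res.x[-1]
</syntaxhighlight>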
 
==== Convergence criterion ====
 
Consider a finite subset of the constraints in the dual problem.  For any finite subset we can solve the resulting linear program and thus satisfy all of its constraints.  If we could prove that of all the constraints not added to the dual problem no single one is violated, we would have shown that solving our restricted problem is equivalent to solving the original problem.  More formally, let <math>\gamma^*</math> be the optimal objective function value for any restricted instance.  Then, we can formulate a search problem for the 'most violated constraint' in the original problem space, namely finding <math>\omega^* \in \Omega</math> as
 
:<math>\omega^* = \underset{\omega \in \Omega}{\textrm{argmax}} \sum_{n=1}^{\ell} y_n h(\boldsymbol{x}_n;\omega) \lambda_n.</math>
 
That is, we search the space <math>\mathcal{H}</math> for the single weak hypothesis <math>h(\cdot;\omega^*)</math> maximizing the left hand side of the dual constraint.  If this constraint cannot be violated by any choice of hypothesis, none of the omitted constraints can be violated in the original problem, and the restricted problem is equivalent to it.
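When the hypothesis space is represented by a finite pool of candidates, the search for <math>\omega^*</math> reduces to picking the candidate with the largest weighted edge.  A minimal sketch, assuming the pool is stored column-wise in a matrix <code>H_pool</code> (an illustrative convention, not part of the original formulation); the constraint of candidate <math>k</math> is violated exactly when its edge exceeds <math>-\gamma^*</math>.

<syntaxhighlight lang="python">
import numpy as np

def most_violated_constraint(H_pool, y, lam):
    """Find the candidate hypothesis with the largest weighted edge.

    H_pool : (l, K) matrix, column k holding the outputs of candidate hypothesis k
    y      : (l,) class labels in {-1, +1}
    lam    : (l,) current dual weights (a distribution over the samples)
    Returns (best_index, best_edge) with best_edge = max_k sum_n y_n h_k(x_n) lambda_n.
    """
    edges = H_pool.T @ (y * lam)
    k = int(np.argmax(edges))
    return k, float(edges[k])
</syntaxhighlight>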
 
==== Penalization constant <math>D</math> ====
 
The positive value of the penalization constant <math>D</math> has to be found using [[model selection]] techniques.  However, if we choose <math>D=\frac{1}{\ell \nu}</math>, where <math>\ell</math> is the number of training samples and <math>0 < \nu < 1</math>, then the new parameter <math>\nu</math> has the following properties.
 
* <math>\nu</math> is an upper bound on the fraction of training errors; that is, if <math>k</math> denotes the number of misclassified training samples, then <math>\frac{k}{\ell} \leq \nu</math>.
* <math>\nu</math> is a lower bound on the fraction of training samples outside or on the margin.
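For example, with <math>\ell = 200</math> training samples and <math>\nu = 0.1</math>, the penalization constant becomes <math>D = \frac{1}{200 \cdot 0.1} = 0.05</math>; at most <math>0.1 \cdot 200 = 20</math> training samples may then be misclassified, and at least 20 training samples lie outside or on the margin.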
 
== Algorithm ==
 
* Input:
** Training set <math>X = \{\boldsymbol{x}_1, \dots, \boldsymbol{x}_{\ell}\}</math>, <math>\boldsymbol{x}_i \in \mathcal{X}</math>
** Training labels <math>Y = \{y_1,\dots,y_{\ell}\}</math>, <math>y_i \in \{-1,1\}</math>
** Convergence threshold <math>\theta \geq 0</math>
* Output:
** Classification function <math>f: \mathcal{X} \to \{-1,1\}</math>
# Initialization
## Weights, uniform <math>\lambda_n \leftarrow \frac{1}{\ell},\quad n=1,\dots,\ell</math>
## Edge <math>\gamma \leftarrow 0</math>
## Hypothesis count <math>J \leftarrow 0</math>
# Iterate
## <math>\hat h \leftarrow \underset{\omega \in \Omega}{\textrm{argmax}} \sum_{n=1}^{\ell} y_n h(\boldsymbol{x}_n;\omega) \lambda_n</math>
## if <math>\sum_{n=1}^{\ell} y_n \hat h(\boldsymbol{x}_n) \lambda_n + \gamma \leq \theta</math> then
### break
## <math>J \leftarrow J + 1</math>
## <math>h_J \leftarrow \hat h</math>
## <math>(\boldsymbol{\lambda},\gamma) \leftarrow</math> solution of the LPBoost dual restricted to the hypotheses <math>h_1,\dots,h_J</math>
## <math>\boldsymbol{\alpha} \leftarrow</math> Lagrangian multipliers of solution to LPBoost dual problem
# <math>f(\boldsymbol{x}) := \textrm{sign} \left(\sum_{j=1}^J \alpha_j h_j (\boldsymbol{x})\right)</math>
 
Note that if the convergence threshold is set to <math>\theta = 0</math>, the solution obtained is the global optimal solution of the above linear program.  In practice, <math>\theta</math> is set to a small positive value in order to obtain a good solution quickly.
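A compact sketch of this loop is given below.  It assumes the <code>solve_restricted_primal</code>, <code>solve_restricted_dual</code> and <code>most_violated_constraint</code> helpers sketched above, as well as a finite pool of candidate hypotheses whose outputs on the training set are precomputed in a matrix <code>H_pool</code>; these names and conventions are illustrative.  For simplicity, the hypothesis weights <math>\boldsymbol{\alpha}</math> are recovered by solving the restricted primal instead of reading the Lagrange multipliers off the dual solver.

<syntaxhighlight lang="python">
import numpy as np

def lpboost_train(H_pool, y, nu=0.1, theta=1e-3, max_iter=200):
    """LPBoost training loop over a finite pool of candidate hypotheses.

    H_pool : (l, K) matrix, column k holding the outputs h_k(x_n) in {-1, +1}
    y      : (l,) class labels in {-1, +1}
    Returns (chosen, alpha): indices of the selected hypotheses and their weights.
    """
    l = len(y)
    D = 1.0 / (l * nu)              # penalization constant from the nu parametrization
    lam = np.full(l, 1.0 / l)       # uniform initial sample weights
    gamma = 0.0                     # initial edge
    chosen = []                     # indices of the hypotheses added so far
    for _ in range(max_iter):
        k, edge = most_violated_constraint(H_pool, y, lam)
        if edge + gamma <= theta:   # convergence check: no sufficiently violated constraint
            break
        chosen.append(k)
        lam, gamma = solve_restricted_dual(H_pool[:, chosen], y, D)
    if not chosen:
        raise ValueError("no weak hypothesis with a sufficiently large edge was found")
    alpha, _, _ = solve_restricted_primal(H_pool[:, chosen], y, D)
    return chosen, alpha

def lpboost_predict(H_pool_test, chosen, alpha):
    """Final classifier f(x) = sign(sum_j alpha_j h_j(x)) on precomputed hypothesis outputs."""
    return np.where(H_pool_test[:, chosen] @ alpha >= 0, 1, -1)
</syntaxhighlight>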
 
=== Realized margin ===
 
The actual margin separating the training samples is termed the ''realized margin'' and is defined as
:<math>\rho(\boldsymbol{\alpha}) := \min_{n=1,\dots,\ell} y_n \sum_{\omega \in \Omega} \alpha_{\omega} h(\boldsymbol{x}_n ; \omega).</math>
 
The realized margin can be, and usually is, negative in the first iterations.  For a hypothesis space that permits singling out any single sample, as is commonly the case, the realized margin will eventually converge to some positive value.
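Under the same illustrative convention of a matrix of hypothesis outputs used in the sketches above, the realized margin of a given weight vector can be computed directly.

<syntaxhighlight lang="python">
import numpy as np

def realized_margin(H, y, alpha):
    """Realized margin rho(alpha) = min_n y_n * sum_j alpha_j h_j(x_n).

    H     : (l, J) matrix with H[n, j] = h_j(x_n) in {-1, +1}
    y     : (l,) class labels in {-1, +1}
    alpha : (J,) non-negative hypothesis weights summing to one
    """
    return float(np.min(y * (H @ alpha)))
</syntaxhighlight>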
 
=== Convergence guarantee ===
 
While the above algorithm is proven to converge, there are, in contrast to other boosting formulations such as [[AdaBoost]] and [[TotalBoost]], no known convergence bounds for LPBoost.  In practice, however, LPBoost is known to converge quickly, often faster than other formulations.
 
== Base learners ==
 
LPBoost is an [[ensemble learning]] method and thus does not dictate the choice of base learners, the space of hypotheses <math>\mathcal{H}</math>.  Demiriz et al. showed that under mild assumptions, any base learner can be used.  If the base learners are particularly simple, they are often referred to as ''[[decision stump]]s''.
 
Many kinds of base learners have been used with boosting in the literature.  For example, if <math>\mathcal{X} \subseteq {\mathbb R}^n</math>, a base learner could be a linear soft-margin [[support vector machine]], or, even simpler, a decision stump of the form
 
:<math>h(\boldsymbol{x} ; \omega \in \{1,-1\}, p \in \{1,\dots,n\}, t \in {\mathbb R}) :=
  \left\{\begin{array}{cl} \omega & \textrm{if~} \boldsymbol{x}_p \leq t\\
                            -\omega & \textrm{otherwise}\end{array}\right..</math>
 
The above decision stump looks only at a single dimension <math>p</math> of the input space and simply thresholds the respective component of the sample with a constant threshold <math>t</math>.  Depending on <math>\omega</math>, it can then decide in either direction for the positive or the negative class.
 
Given weights for the training samples, constructing the optimal decision stump of the above form simply involves searching along all sample columns and determining <math>p</math>, <math>t</math> and <math>\omega</math> in order to optimize the gain function.
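A minimal sketch of this exhaustive search is given below, assuming the training samples are the rows of a NumPy matrix and taking the observed feature values as candidate thresholds; the function name is illustrative.

<syntaxhighlight lang="python">
import numpy as np

def best_decision_stump(X, y, lam):
    """Find the decision stump maximizing the weighted edge sum_n y_n h(x_n) lambda_n.

    X   : (l, d) matrix of training samples (one sample per row)
    y   : (l,) class labels in {-1, +1}
    lam : (l,) sample weights (a distribution over the samples)
    Returns ((omega, p, t), edge) describing the best stump and its edge.
    """
    best_edge, best_stump = -np.inf, None
    for p in range(X.shape[1]):                 # search every feature dimension
        for t in np.unique(X[:, p]):            # candidate thresholds: observed values
            base = np.where(X[:, p] <= t, 1, -1)
            edge = float(np.sum(y * base * lam))
            for omega in (1, -1):               # flipping omega flips the sign of the edge
                if omega * edge > best_edge:
                    best_edge, best_stump = omega * edge, (omega, p, t)
    return best_stump, best_edge
</syntaxhighlight>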
 
==References==
* A. Demiriz, K. P. Bennett, and J. Shawe-Taylor. [http://www.boosting.org/papers/upload_27160_mlj.ps.gz Linear Programming Boosting via Column Generation]. ''Machine Learning'', 46:225–254, 2002.
 
[[Category:Ensemble learning]]
