## Efficient Margin Maximizing with Boosting

**Gunnar Rätsch, Manfred K. Warmuth**; 6(71):2131−2152, 2005.

### Abstract

AdaBoost produces a linear combination of base hypotheses and predicts with the sign of this linear combination. The linear combination may be viewed as a hyperplane in feature space where the base hypotheses form the features. It has been observed that the generalization error of the algorithm continues to improve even after all examples are on the correct side of the current hyperplane. The improvement is attributed to the experimental observation that the distances (margins) of the examples to the separating hyperplane are increasing even after all examples are on the correct side.
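The margin of an example under such a weighted vote can be sketched as follows (a minimal illustration, not code from the paper; the matrix `H` of base-hypothesis outputs and the normalization convention are assumptions):

```python
import numpy as np

# Margins of examples under a weighted vote of base hypotheses h_1..h_T.
# H has shape (n_examples, T) with H[i, t] = h_t(x_i) in {-1, +1};
# y holds the +/-1 labels; alpha holds the linear coefficients.
def margins(H, y, alpha):
    a = np.asarray(alpha, dtype=float)
    a = a / np.abs(a).sum()   # normalize so margins lie in [-1, +1]
    return y * (H @ a)        # margin of example i: y_i * sum_t a_t h_t(x_i)

H = np.array([[+1, +1], [+1, -1], [-1, -1]])
y = np.array([+1, +1, -1])
print(margins(H, y, [0.7, 0.3]).min())  # minimum margin over the examples
```

All margins are positive exactly when every example is on the correct side of the hyperplane; the minimum of these values is the quantity the margin-maximizing view cares about.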

We introduce a new version of AdaBoost, called
AdaBoost*_ν, that
explicitly maximizes the minimum margin of the examples up to a given
precision. The algorithm incorporates a current estimate of the
achievable margin into its calculation of the linear coefficients of
the base hypotheses. The bound on the number of iterations needed by
the new algorithm is the same as the number needed by a known version
of AdaBoost that must have an explicit estimate of the achievable
margin as a parameter. We also illustrate experimentally
that our algorithm requires considerably fewer iterations
than other algorithms that aim to maximize the margin.
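The coefficient calculation described above can be sketched as follows. This is a hedged reconstruction from the commonly stated form of the AdaBoost*_ν update, not from the abstract itself: the symbols γ_t (the edge of the base hypothesis at iteration t) and the margin estimate ρ_t = min_r γ_r − ν come from the full paper, and ν is the precision parameter.

```python
import math

# Sketch of the AdaBoost*_nu coefficient update (assumed form):
# given edges gamma_1..gamma_t observed so far, the current estimate of
# the achievable margin is rho_t = min_r gamma_r - nu, and the coefficient is
#   alpha_t = 1/2 ln((1+gamma_t)/(1-gamma_t)) - 1/2 ln((1+rho_t)/(1-rho_t)).
def adaboost_star_coefficient(edges, nu):
    gamma_t = edges[-1]
    rho_t = min(edges) - nu  # current estimate of the achievable margin
    return (0.5 * math.log((1 + gamma_t) / (1 - gamma_t))
            - 0.5 * math.log((1 + rho_t) / (1 - rho_t)))

# With rho_t = 0 the second term vanishes and this reduces to the
# plain AdaBoost coefficient 1/2 ln((1+gamma_t)/(1-gamma_t)).
print(adaboost_star_coefficient([0.4, 0.3], 0.3))
```

The second term shrinks the coefficient whenever the margin estimate is positive, which is how the estimate of the achievable margin enters the linear coefficients.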

© JMLR 2005.