## Optimally-Smooth Adaptive Boosting and Application to Agnostic Learning

**Dmitry Gavinsky**; 4(May):101-117, 2003.

### Abstract

We describe a new boosting algorithm that is the first to be both smooth and adaptive. Together, these two features enable performance improvements for many learning tasks whose solutions use a boosting technique.

The boosting approach was originally suggested for the standard PAC model; we analyze possible applications of boosting in the context of agnostic learning, which is more realistic than the PAC model. We derive a lower bound for the final error achievable by boosting in the agnostic model and show that our algorithm actually achieves that accuracy (within a constant factor).

We note that the idea of applying boosting in the agnostic model was first suggested by Ben-David, Long and Mansour (2001); the present paper improves upon their solution. The accuracy we achieve is exponentially better with respect to the standard agnostic accuracy parameter β.

We also describe the construction of a boosting "tandem" whose asymptotic
number of iterations is the lowest possible (in both γ and ε)
and whose smoothness is optimal in terms of *Õ*(·).
This allows adaptive solutions to problems whose existing solutions rely on smooth
boosting (such as noise-tolerant boosting and DNF membership learning), while
preserving the complexity of the original (non-adaptive) solutions.
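For intuition only, the "smoothness" property mentioned above means that the distributions a booster feeds to its weak learner never concentrate too much probability on any single example. The sketch below is not the paper's algorithm; it illustrates the general capped-weight style (as in SmoothBoost/MadaBoost-type schemes), with the `margins` values and `gamma` parameter chosen purely for demonstration.

```python
# Illustrative sketch of smoothness via capped weights.
# Each example's weight is clipped at 1, so after normalization no example
# can receive probability greater than 1 / (sum of all weights).
# NOT the algorithm of this paper; margins and gamma are hypothetical.
import math

def smooth_distribution(margins, gamma=0.1):
    """Capped exponential weighting: weight = min(1, exp(-gamma * margin))."""
    weights = [min(1.0, math.exp(-gamma * m)) for m in margins]
    total = sum(weights)
    return [w / total for w in weights]

margins = [3.0, -1.0, 0.5, -2.0, 0.0]  # made-up per-example margins
dist = smooth_distribution(margins)

# The cap guarantees: max probability <= 1 / (sum of capped weights),
# which stays small as long as many examples retain nonzero weight.
assert abs(sum(dist) - 1.0) < 1e-9
```

Without the cap, hard examples (large negative margins) would receive exponentially large weight, which is exactly the behavior that smooth boosting rules out.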