On Boosting with Polynomially Bounded Distributions
Nader H. Bshouty, Dmitry Gavinsky
We construct a framework which allows an
algorithm to turn the distributions produced by some boosting
algorithms into polynomially smooth distributions (w.r.t. the PAC
oracle) with minimal performance loss.
Further, we explore the case of Freund and Schapire's AdaBoost
algorithm, bounding its distributions to be polynomially smooth. The
main advantage of AdaBoost over other boosting techniques is that it
is adaptive, i.e., it is able to take advantage of weak hypotheses
that are more accurate than was assumed a priori. We show that the
feature of adaptiveness is preserved and improved by our technique.
Our scheme allows the execution of AdaBoost
in the on-line boosting mode (i.e., performing boosting
"by filtering"). Executed this way (and
possessing the quality of smoothness), AdaBoost
may be efficiently
applied to a wider range of learning problems than before.
In particular, we demonstrate the application of AdaBoost to the task
of DNF learning using membership queries. This application results
in an algorithm that chooses the number of boosting iterations
adaptively and, consequently, adaptively chooses the size of the
produced final hypothesis. This answers affirmatively a question
posed by Jackson.
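To make the two central notions concrete, the following is a minimal sketch, not the paper's construction: the standard AdaBoost distribution update, together with an illustrative capping step that enforces p-smoothness (every weight bounded by p/m). The function names and the bisection-based projection are our own illustration, chosen only to show what "bounding the distributions to be polynomially smooth" means.

```python
import math

def adaboost_update(D, margins, alpha):
    """One standard AdaBoost reweighting step:
    D_{t+1}(i) proportional to D_t(i) * exp(-alpha * y_i * h_t(x_i)),
    where margins[i] = y_i * h_t(x_i) is +1 if example i is classified
    correctly by the weak hypothesis h_t and -1 otherwise."""
    w = [d * math.exp(-alpha * m) for d, m in zip(D, margins)]
    Z = sum(w)                      # normalization factor Z_t
    return [x / Z for x in w]

def cap_to_smooth(D, p):
    """Illustrative projection of D onto the p-smooth distributions
    (every weight at most p/m, where m = sample size). Finds a scale s
    with sum_i min(s * D(i), p/m) = 1 by bisection. This is a simple
    stand-in for smoothing, not the paper's actual technique."""
    m = len(D)
    cap = p / m
    if max(D) <= cap:               # already p-smooth, nothing to do
        return list(D)
    lo, hi = 0.0, 1.0
    while sum(min(hi * d, cap) for d in D) < 1.0:
        hi *= 2.0                   # grow until total mass reaches 1
    for _ in range(100):            # bisect on the scale factor s
        mid = (lo + hi) / 2.0
        if sum(min(mid * d, cap) for d in D) < 1.0:
            lo = mid
        else:
            hi = mid
    return [min(hi * d, cap) for d in D]
```

For instance, with four equally weighted examples of which one is misclassified (weak error 1/4) and the usual choice alpha = (1/2) ln((1 - eps)/eps), the update places total weight 1/2 on the misclassified example; without a smoothness bound, repeated updates of this kind can concentrate the distribution far above the uniform weight 1/m, which is precisely what a p-smoothness constraint rules out.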