Dynamics of Variance Reduction in Bagging and Other Techniques

Title: Dynamics of Variance Reduction in Bagging and Other Techniques
Publication Type: Conference Paper
Year of Publication: 2005
Authors: Fumera, G, Roli, F, Serrau, A
Editors: Oza, NC, Polikar, R, Kittler, J, Roli, F
Conference Name: 6th Int. Workshop on Multiple Classifier Systems (MCS 2005)
Volume: 3541
Pagination: 316-325
Date Published: 13/06/2005
Publisher: Springer
Conference Location: Seaside, CA, USA
Keywords: combining rules, ensemble construction, linear combiners, mcs01, mcs02, Multiple Classifier Systems
Abstract

In this paper the performance of bagging in classification problems is theoretically analysed, using a framework developed in works by Tumer and Ghosh and extended by the authors. A bias-variance decomposition is derived, which relates the expected misclassification probability attained by linearly combining classifiers trained on N bootstrap replicates of a fixed training set to that attained by a single bootstrap replicate of the same training set. Theoretical results show that the expected misclassification probability of bagging has the same bias component as a single bootstrap replicate, while the variance component is reduced by a factor N. Experimental results show that the performance of bagging as a function of the number of bootstrap replicates follows our theoretical prediction quite well. It is finally shown that theoretical results derived for bagging also apply to other methods for constructing multiple classifiers based on randomisation, such as the random subspace method and tree randomisation.
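As an informal illustration (not taken from the paper), the 1/N variance reduction can be reproduced with a toy simulation in the spirit of the Tumer-Ghosh boundary-offset view: each "classifier" is reduced to a scalar decision threshold trained on a bootstrap replicate of a fixed training set, and averaging N such thresholds shrinks their variance roughly as 1/N while leaving the expected offset from the optimal boundary (the bias) unchanged. All names, distributions and parameters below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D two-class problem with a fixed training set; the optimal
# decision boundary lies at 0. A "classifier" is just a threshold,
# namely the midpoint of the two class means on a bootstrap replicate.
n = 100
class_a = rng.normal(-1.0, 1.0, n)   # fixed training sample, class A
class_b = rng.normal(+1.0, 1.0, n)   # fixed training sample, class B

def bootstrap_threshold():
    """Train one classifier (a threshold) on a bootstrap replicate."""
    a = class_a[rng.integers(0, n, n)]
    b = class_b[rng.integers(0, n, n)]
    return 0.5 * (a.mean() + b.mean())

def bagged_threshold(N):
    """Linearly combine N bootstrap-trained classifiers by averaging."""
    return np.mean([bootstrap_threshold() for _ in range(N)])

n_runs = 5000
for N in (1, 5, 10, 25):
    var = np.var([bagged_threshold(N) for _ in range(n_runs)])
    print(f"N = {N:2d}  variance of combined boundary ~ {var:.6f}")
```

Because the bootstrap replicates are drawn independently given the fixed training set, the variance of the averaged threshold is the single-replicate variance divided by N, which mirrors the paper's claim that only the variance component of the error scales with N while the bias component does not.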
