Theoretical Issues

Multiple classifier systems (MCSs) have been the methodological core around which PRA Lab's research activities have evolved since its foundation in 1996. PRA Lab has co-organized the International Workshop on Multiple Classifier Systems since its first edition in 2000. The MCS workshop is a well-established series of meetings providing the leading international forum for the discussion of issues in multiple classifier systems and ensemble methods. It brings together researchers from diverse communities concerned with this topic, including pattern recognition, machine learning, neural networks, data mining and statistics.
 
[Figure: Official logo of the International Workshop on MCS]

MCSs are a state-of-the-art approach to classifier design. The traditional approach is based on evaluating different, alternative classification algorithms for a given problem, and on choosing the most accurate one. This approach exhibits several drawbacks:

  • In many practical applications the available labelled data are not sufficient to reliably identify the best individual classifier, and one even risks selecting the worst one. Combining all (or a subset of) the classifiers at hand can prevent the choice of the worst classifier, and can even produce a more accurate classifier than any individual one.
  • Even if the available classifiers exhibit the same accuracy, they may misclassify different samples. In this case, combining them can improve their accuracy.
  • In some applications, different kinds of information sources (features) are available. For instance, different biometric traits can be exploited for biometric identity recognition, like face, voice and fingerprints. Designing a single, "monolithic" classifier based on all the features at hand is a complex task, and the resulting accuracy can be unsatisfactory. MCSs provide an advantageous solution also in this scenario: designing a distinct classifier for each kind of feature, and combining their outputs through a suitable fusion rule (a minimal sketch of this scheme is given after this list).
  • It is known that the accuracy of a given learning algorithm, on a given classification problem, can be improved by combining different classifiers obtained by running that algorithm on different versions of the available training set. For instance, one can randomly resample the training set with replacement (bagging), or sequentially construct the individual classifiers, forcing each of them to focus on training samples misclassified by the previous classifiers (boosting); the second sketch after this list illustrates both.
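
As an illustration of the feature-fusion scenario above, here is a minimal sketch in Python with scikit-learn. The two feature matrices stand in for two information sources (say, face and voice features); the names X_face and X_voice, the synthetic data, and the choice of averaging as the fusion rule are all illustrative assumptions, not a prescribed design.

```python
# Minimal sketch: one classifier per feature type, outputs fused by
# averaging the estimated posterior probabilities. The two "views" are
# synthetic stand-ins for real information sources (e.g. face and voice).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X_face, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_voice = X_face + rng.normal(scale=2.0, size=X_face.shape)  # a noisier second view

# One classifier per kind of feature.
clf_face = LogisticRegression(max_iter=1000).fit(X_face, y)
clf_voice = SVC(probability=True, random_state=0).fit(X_voice, y)

# Fusion rule: simple average of the two posterior estimates.
fused = (clf_face.predict_proba(X_face) + clf_voice.predict_proba(X_voice)) / 2
print("fused accuracy:", (fused.argmax(axis=1) == y).mean())
```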
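
The resampling-based schemes in the last item can be sketched with scikit-learn's off-the-shelf implementations; the dataset, base learner and ensemble size below are arbitrary illustrative choices.

```python
# Sketch: bagging resamples the training set with replacement; AdaBoost
# (a boosting algorithm) reweights samples misclassified by earlier trees.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
base = DecisionTreeClassifier(max_depth=3, random_state=0)

models = {
    "single tree": base,
    "bagging": BaggingClassifier(base, n_estimators=50, random_state=0),
    "boosting": AdaBoostClassifier(base, n_estimators=50, random_state=0),
}
for name, model in models.items():
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```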

PRA Lab has made several contributions to MCS theory and methods:

  • We developed MCS design methods based on the overproduce and choose paradigm. These consist of first creating a large set of different classifiers, and then selecting the "best" subset of classifiers to combine, in terms of the trade-off between individual classifier accuracy and diversity (a sketch of the paradigm follows this list).
  • We developed dynamic classifier selection techniques, which aim to select, at classification time, the individual classifier that is most likely to correctly classify a given sample (see the second sketch after this list).
  • We gave a theoretical analysis of one of the most widely used combining rules, the linear combination of classifier outputs, focusing on the comparison between simple averaging (which gives identical weights to all classifiers) and weighted averaging (which is potentially more accurate, but requires training data for reliable weight estimation). We also derived practical guidelines for choosing between them, and for choosing the size of linearly combined ensembles constructed using randomisation-based techniques like bagging (the third sketch after this list contrasts the two rules).
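
The overproduce and choose paradigm can be sketched as follows. This is a simplified, hypothetical illustration: greedy selection on the validation accuracy of the majority vote stands in for the accuracy/diversity trade-off criteria used in the actual design methods.

```python
# Sketch of overproduce and choose: overproduce a pool of classifiers,
# then greedily select the subset whose majority vote performs best on a
# held-out validation set (diversity terms are omitted for brevity).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Overproduce: a pool of trees trained on bootstrap replicates.
rng = np.random.RandomState(0)
pool = []
for _ in range(20):
    idx = rng.randint(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(max_depth=3).fit(X_tr[idx], y_tr[idx]))

def vote_accuracy(members):
    votes = np.stack([m.predict(X_val) for m in members])
    maj = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote, binary labels
    return (maj == y_val).mean()

# Choose: greedy forward selection of ensemble members.
selected, remaining = [], list(pool)
while remaining:
    best = max(remaining, key=lambda m: vote_accuracy(selected + [m]))
    if selected and vote_accuracy(selected + [best]) <= vote_accuracy(selected):
        break
    selected.append(best)
    remaining.remove(best)

print("pool:", len(pool), "-> selected:", len(selected),
      "val acc:", vote_accuracy(selected))
```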
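
Next, a minimal sketch of dynamic classifier selection in the spirit of local-accuracy methods such as DCS-LA: each test sample is classified by whichever base classifier is most accurate on the sample's nearest validation neighbours. The base classifiers, the neighbourhood size k=7 and the data splits are illustrative assumptions.

```python
# Sketch of dynamic classifier selection: pick, per test sample, the
# classifier with the best accuracy on its k nearest validation neighbours.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5,
                                            random_state=0)

classifiers = [LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
               GaussianNB().fit(X_tr, y_tr),
               DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)]

# Precompute each classifier's correctness on the validation set.
correct = np.stack([c.predict(X_val) == y_val for c in classifiers])  # (n_clf, n_val)
nn = NearestNeighbors(n_neighbors=7).fit(X_val)

y_pred = np.empty(len(X_te), dtype=y.dtype)
for i, x in enumerate(X_te):
    _, idx = nn.kneighbors(x.reshape(1, -1))
    local_acc = correct[:, idx[0]].mean(axis=1)  # accuracy in the neighbourhood
    y_pred[i] = classifiers[local_acc.argmax()].predict(x.reshape(1, -1))[0]

print("DCS test accuracy:", (y_pred == y_te).mean())
```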
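
Finally, a toy comparison of simple versus weighted averaging of classifier outputs. Weighting each classifier by its validation accuracy is just one plausible weight estimate, used here to make the contrast concrete; the analysis referenced above derives actual guidelines for when weighting is expected to pay off.

```python
# Sketch: simple averaging uses identical weights and no extra data;
# weighted averaging estimates weights (here from validation accuracy).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=1)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=1)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5,
                                            random_state=1)

clfs = [LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
        GaussianNB().fit(X_tr, y_tr),
        DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_tr, y_tr)]

probs_te = np.stack([c.predict_proba(X_te) for c in clfs])  # (n_clf, n_te, n_cls)

# Simple averaging: identical weights for all classifiers.
simple = probs_te.mean(axis=0).argmax(axis=1)

# Weighted averaging: weights proportional to validation accuracy.
acc = np.array([c.score(X_val, y_val) for c in clfs])
w = acc / acc.sum()
weighted = np.tensordot(w, probs_te, axes=1).argmax(axis=1)

print("simple avg:", (simple == y_te).mean(),
      "weighted avg:", (weighted == y_te).mean())
```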

We also investigated the capability of MCSs to improve security in adversarial learning applications.

Finally, we have applied the MCS paradigm in several application fields.