We have mostly ignored the issue of decoding techniques for combining the predictions of the pairwise classifiers into a final prediction for the multi-class problem. The technique we used in this paper (simple voting, using the a priori probability of the class as a tie breaker) is quite likely to be suboptimal. First, one could improve the tie breaking by exploring techniques that are commonly used for breaking ties in tournament cross tables in games and sports (such as the Sonneborn-Berger ranking in chess tournaments). A further step would be to weight each vote with a confidence estimate provided by the base classifier, or to allow a classifier to vote for a class only if it has a certain minimum confidence in its prediction. Several studies in various contexts have compared different voting techniques for combining the predictions of the individual classifiers of an ensemble (e.g., Mayoraz and Moreira, 1997; Fürnkranz, 2001a; Allwein et al., 2000). Although the final word on this issue remains to be spoken, techniques that incorporate confidence estimates into the computation of the final prediction appear to be preferable (cf. also Schapire and Singer, 1999).

Along similar lines, there have been several proposals for combining the class probability estimates of the pairwise classifiers into a class probability distribution for the multi-class problem (Hastie and Tibshirani, 1998; Price et al., 1995). More elaborate proposals suggest learning separate classifiers for deciding whether a given example belongs to one of the two classes used to train a certain member of the pairwise ensemble (Moreira and Mayoraz, 1998), or organizing the classifiers into an efficient graph structure that can derive a prediction in at most c-1 steps (Platt et al., 2000).
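The decoding variants discussed above can be sketched as follows. The interface is a hypothetical one chosen for illustration (a mapping from each class pair (i, j) with i < j to the base classifier's confidence that the example belongs to class i), not the one used in our experiments; the third function mimics the graph-structured evaluation in the style of Platt et al. (2000), which needs only c-1 pairwise evaluations:

```python
import numpy as np

def pairwise_vote(confidences, priors):
    """Simple voting; ties are broken by the a priori class probability.

    confidences: dict mapping (i, j), i < j, to the confidence (in [0, 1])
    that the example belongs to class i (hypothetical interface).
    priors: array of a priori class probabilities, used only for tie breaking.
    """
    c = len(priors)
    votes = np.zeros(c)
    for (i, j), p_i in confidences.items():
        votes[i if p_i >= 0.5 else j] += 1
    # among the classes with the maximal vote count, pick the most frequent one
    winners = np.flatnonzero(votes == votes.max())
    return int(winners[np.argmax(priors[winners])])

def weighted_pairwise_vote(confidences, c):
    """Weight each vote by the base classifier's confidence estimate."""
    votes = np.zeros(c)
    for (i, j), p_i in confidences.items():
        votes[i] += p_i
        votes[j] += 1.0 - p_i
    return int(np.argmax(votes))

def ddag_predict(confidences, c):
    """Graph-structured evaluation: each pairwise comparison eliminates one
    of the two remaining extreme candidates, so a prediction is reached
    after exactly c - 1 classifier evaluations."""
    candidates = list(range(c))
    while len(candidates) > 1:
        i, j = candidates[0], candidates[-1]
        if confidences[(i, j)] >= 0.5:
            candidates.pop()      # class j is eliminated
        else:
            candidates.pop(0)     # class i is eliminated
    return candidates[0]
```

For a three-class problem with confidences {(0, 1): 0.9, (0, 2): 0.8, (1, 2): 0.6}, all three schemes predict class 0; they differ when the unweighted votes are tied or the confidences are close to 0.5.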