Contrary to the theoretical analysis in the previous section, where we focused on the ``friendly'' case of pairing unordered binarization with a single round robin, our empirical results compare the performance of a double round robin with RIPPER as a base learner against both ordered binarization (RIPPER's default mode) and unordered binarization (RIPPER with the option -a unordered). For a base learner with linear run-time complexity, the double round robin should be about two times slower than unordered binarization and four times slower than ordered binarization. As discussed in the previous section, RIPPER's super-linear run-time^11 might improve the relative performance of round robin learning.
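The two-times and four-times expectations for a linear base learner follow from counting training examples across the binary subproblems. The following sketch illustrates the arithmetic under the simplifying assumption of c equally sized classes (the class sizes and the cost model are illustrative, not measurements from the paper):

```python
# Training effort for a base learner with linear run-time, assuming
# c equally sized classes of n/c examples each (illustrative assumption).

def double_round_robin_cost(n, c):
    # c*(c-1) ordered pairwise tasks, each trained on the examples of
    # two classes: n_i + n_j = 2n/c examples per task
    return c * (c - 1) * (2 * n / c)  # = 2*(c-1)*n

def unordered_binarization_cost(n, c):
    # one one-vs-rest task per class, each on all n examples
    return c * n

def ordered_binarization_cost(n, c):
    # c-1 tasks; after each class is learned, its examples are removed,
    # so task i only sees the remaining fraction of the data
    return sum(n * (c - i) / c for i in range(c - 1))  # ~ n*(c+1)/2

n, c = 10_000, 10
rr = double_round_robin_cost(n, c)
print(rr / unordered_binarization_cost(n, c))  # 2*(c-1)/c -> 1.8
print(rr / ordered_binarization_cost(n, c))    # ~ 4 for large c
```

For c = 10 the ratios come out at 1.8 and about 3.3, approaching 2 and 4 as the number of classes grows.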
| dataset          | R3 (secs) | vs. unordered | vs. ordered |
|------------------|-----------|---------------|-------------|
| solar flares (c) | 6.65      | 6.03          | 7.57        |
| solar flares (m) | 3.98      | 5.62          | 7.49        |
Table 3 shows the run-times in CPU seconds of user time (measured on a Sparc Ultra-2 under Sun Solaris) of R3 and its performance ratios against RIPPER in unordered and ordered mode. The reported times are average training times;^12 i.e., the performance ratios can be interpreted as empirical estimates of the class penalty ratios relative to the unordered and ordered binarizations. On average, R3 is about two times slower than RIPPER in unordered mode, and about three times slower than RIPPER in its default, ordered mode, while RIPPER's ordered mode is about 1.5 times faster than its unordered mode (as opposed to the factor 2 we assumed in the theoretical analysis at the end of Section 5.1). This means that despite the inefficient implementation, the empirical values are fairly close to the estimates we made at the end of the previous section: for a linear algorithm, we expected the double round-robin procedure to be about twice as slow as the unordered approach and about four times as slow as the ordered approach. Apparently, the additional savings--based on the fact that simpler concepts are learned for the pairwise tasks and that RIPPER's run-time is super-linear--make up for the losses due to the sub-optimal implementation as a wrapper. For more expensive base learning algorithms (such as support vector machines), the analysis in the previous section leads us to expect even bigger savings.
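Reading the table concretely: each ratio divides R3's run-time by RIPPER's in the respective mode, so RIPPER's absolute times can be recovered from the rows above. A small sketch of this back-calculation (assuming, as the text implies, that a ratio greater than one means R3 is slower):

```python
# Back-calculating RIPPER's run-times from the two Table 3 rows shown
# above (assumes each ratio is R3's time divided by RIPPER's time).
rows = {
    "solar flares (c)": (6.65, 6.03, 7.57),  # (R3 secs, vs. unordered, vs. ordered)
    "solar flares (m)": (3.98, 5.62, 7.49),
}
for name, (r3_secs, vs_unordered, vs_ordered) in rows.items():
    unordered_secs = r3_secs / vs_unordered  # RIPPER with -a unordered
    ordered_secs = r3_secs / vs_ordered      # RIPPER in default mode
    print(f"{name}: unordered ~ {unordered_secs:.2f}s, "
          f"ordered ~ {ordered_secs:.2f}s")
```

On these two datasets RIPPER's ordered mode is indeed faster than its unordered mode, consistent with the roughly 1.5x average factor reported in the text.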
Moreover, there are several cases where R3 is even faster than RIPPER in unordered mode and comes close to RIPPER in ordered mode. This is despite the fact that R3 is implemented as a Perl script that communicates with RIPPER by writing the training and test sets of the new tasks to disk, while unordered and ordered binarization are native to RIPPER's efficient implementation in C. Clearly, a tight integration of round robin binarization into RIPPER's C code would be more efficient.^13
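The wrapper logic itself is simple. A minimal in-memory sketch of double round robin learning, with a Python callable standing in for the external RIPPER invocations and simple voting as one plausible way to combine the pairwise predictions (the learner interface, the toy nearest-mean base learner, and the voting scheme are all illustrative assumptions):

```python
import itertools

def double_round_robin(examples, learn):
    """Train one binary model per ordered class pair (c*(c-1) tasks).
    examples: list of (features, label) pairs; learn: base learner
    callable standing in for a call to RIPPER (hypothetical interface)."""
    classes = sorted({y for _, y in examples})
    models = {}
    for pos, neg in itertools.permutations(classes, 2):
        # each pairwise task only sees the examples of two classes
        subset = [(x, 1 if y == pos else 0)
                  for x, y in examples if y in (pos, neg)]
        models[(pos, neg)] = learn(subset)
    return models, classes

def predict(models, classes, x):
    # simple voting: each pairwise model votes for its positive class
    votes = {c: 0 for c in classes}
    for (pos, _neg), model in models.items():
        if model(x):
            votes[pos] += 1
    return max(votes, key=votes.get)

# toy base learner standing in for RIPPER: nearest class mean on 1-D data
def learn(subset):
    pos = [x[0] for x, y in subset if y == 1]
    neg = [x[0] for x, y in subset if y == 0]
    pm, nm = sum(pos) / len(pos), sum(neg) / len(neg)
    return lambda x: abs(x[0] - pm) < abs(x[0] - nm)

examples = [((0,), "a"), ((1,), "a"), ((5,), "b"),
            ((6,), "b"), ((10,), "c"), ((11,), "c")]
models, classes = double_round_robin(examples, learn)
print(predict(models, classes, (0.5,)))   # -> a
print(predict(models, classes, (10.5,)))  # -> c
```

The actual implementation replaces the `learn` callable with a shell call to RIPPER after writing each pairwise training set to disk, which is exactly the file-handling overhead the text attributes the remaining slowdown to.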