Finally, we evaluated how *SVMTorch* (with shrinking and in
non-sparse format) and *Nodelib* scale
with respect to the size of the training set. To avoid being influenced by
the implementation of the cache system, we computed the training time
for training sets of sizes 500, 1000, 2000, 3000, 4000, and 5000, so
that the whole matrix of the quadratic problem could be kept in memory.
We then performed a linear regression of the log of the training time
against the log of the training set size; Table 6
gives the slope of this regression for each problem,
which indicates how *SVMTorch* scales: it appears to be
slightly better than quadratic, and slightly better than *Nodelib*.
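The slope estimation described above can be sketched as follows; the timing values here are synthetic placeholders (roughly quadratic by construction), not measurements from the experiments:

```python
import numpy as np

# Training-set sizes used in the scaling experiment.
sizes = np.array([500, 1000, 2000, 3000, 4000, 5000], dtype=float)

# Hypothetical training times in seconds; real values would come from
# timing the solver on each subset. Here we simulate scaling of the
# form t ~ c * n^1.9, purely for illustration.
times = 1e-6 * sizes ** 1.9

# Linear regression of log(time) on log(size): the slope estimates the
# scaling exponent alpha in t ~ c * n^alpha.
slope, intercept = np.polyfit(np.log(sizes), np.log(times), 1)

print(f"estimated scaling exponent: {slope:.2f}")
```

A slope near 2 corresponds to quadratic scaling; a slope slightly below 2, as reported for *SVMTorch*, means the training time grows a bit more slowly than quadratically over this range of sizes.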