
Scaling with Respect to the Size of the Training Set

Finally, we evaluated how SVMTorch (with shrinking, in non-sparse format) and Nodelib scale with respect to the size of the training set. To avoid being influenced by the implementation of the cache system, we measured the training time for training sets of sizes 500, 1000, 2000, 3000, 4000, and 5000, so that the whole matrix of the quadratic problem could be kept in memory. We then performed a linear regression of the log of the training time against the log of the training-set size; Table 6 gives the slope of this regression for each problem, which indicates how SVMTorch scales: it appears to be slightly better than quadratic, and slightly better than Nodelib.
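The scaling estimate described above can be sketched as follows: fitting a straight line to log(time) versus log(size), whose slope is the empirical scaling exponent. The training times below are illustrative placeholders, not measurements from the paper; only the training-set sizes come from the text.

```python
import numpy as np

# Training-set sizes used in the experiment (from the text).
sizes = np.array([500, 1000, 2000, 3000, 4000, 5000])

# Hypothetical training times in seconds, for illustration only.
times = np.array([1.2, 4.1, 14.5, 30.2, 52.0, 80.5])

# Fit log(time) = slope * log(size) + intercept.
# A slope near 2 indicates roughly quadratic scaling.
slope, intercept = np.polyfit(np.log(sizes), np.log(times), 1)
print(f"empirical scaling exponent: {slope:.2f}")
```

A slope of 1.8, for example, means training time grows roughly as size^1.8, i.e. somewhat better than quadratic.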


 
Table 6: Scaling of SVMTorch and Nodelib for each dataset. Results give the slope of the linear regression in the log-log domain of time versus training size.

             Kin    Artificial  Forest  Sunspots  MNIST
  SVMTorch   1.81   1.72        1.82    1.85      1.64
  Nodelib    1.83   1.93        2.09    2.44      1.75
 


Journal of Machine Learning Research