My program isn't finished yet, so I can't report any results soon, but I found this section in the LibSVM guide (http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf):
Scaling before applying SVM is very important. Part 2 of Sarle's Neural Networks FAQ (Sarle 1997) explains why we scale data while using Neural Networks, and most of the considerations also apply to SVM.
The main advantage is to avoid attributes in greater numeric ranges dominating those in smaller numeric ranges. Another advantage is to avoid numerical difficulties during the calculation. Because kernel values usually depend on the inner products of feature vectors, e.g. the linear kernel and the polynomial kernel, large attribute values might cause numerical problems. We recommend linearly scaling each attribute to the range [-1, +1] or [0, 1].
Of course we have to use the same method to scale testing data before testing. For example, suppose that we scaled the first attribute of training data from [-10, +10] to [-1, +1]. If the first attribute of testing data is lying in the range [-11, +8], we must scale the testing data to [-1.1, +0.8].
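Just to make the guide's worked example concrete for myself, here is a minimal sketch in Python (my own illustration, not from the guide): the scaling parameters are derived from the training range [-10, +10] only, and the same linear mapping is then applied to the test values, which is how -11 and +8 end up at -1.1 and +0.8.

    # Linear scaling to [-1, +1] using parameters derived from the TRAINING data only.
    train_min, train_max = -10.0, 10.0   # observed range of the first attribute in training data
    lo, hi = -1.0, 1.0                   # target range recommended by the guide

    def scale(x):
        # Map x with the training-derived parameters; test values outside
        # the training range simply fall outside [-1, +1].
        return lo + (x - train_min) * (hi - lo) / (train_max - train_min)

    print(scale(-10.0), scale(10.0))   # -1.0 and 1.0 (training extremes)
    print(scale(-11.0), scale(8.0))    # -1.1 and 0.8 (test values, matching the guide's example)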
So, I will use normalization before applying the LibSVM learner.
Is there something like a "NormalizationModel" to get the same normalized values for training and test examples?
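To illustrate what I mean in code (using scikit-learn's MinMaxScaler outside of the RapidMiner/LibSVM context, purely as an assumed stand-in): the fitted scaler object is exactly the kind of reusable "normalization model" I am asking about, since it is fitted on the training data and then applied unchanged to the test data.

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    X_train = np.array([[-10.0], [0.0], [10.0]])   # first attribute of training data
    X_test  = np.array([[-11.0], [8.0]])           # first attribute of test data

    scaler = MinMaxScaler(feature_range=(-1, 1))   # the reusable "normalization model"
    scaler.fit(X_train)                            # learn min/max from training data only

    print(scaler.transform(X_train))   # scaled training values: -1.0, 0.0, 1.0
    print(scaler.transform(X_test))    # scaled test values: -1.1, 0.8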