...meaning that there are in fact 0 examples between the training and testing examples. No big deal, just thought I'd point it out.
Hmm, that's right. Any idea what this behaviour is called in English?
My question is whether I should use 1 as the parameter for horizon, or whether horizon should reflect the forecast period, such that if the prediction is for 10 days ahead, horizon should be set to 9. I can see arguments on either side, so I'll appeal to wiser minds for guidance.
The horizon in the sliding validation is actually independent of the horizon in the windowing used for learning. It just defines the gap between the training and testing examples, and can be used, for example, if you want to predict the values for the next year based on the data from the previous one (so the validation horizon is 365 days). The learner windowing, on the other hand, could be set to a "1-day horizon" if you want to predict the next day's value.
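To make the role of the validation horizon concrete, here is a minimal Python sketch of how a sliding validation with a horizon gap could work. The function name and parameters are purely illustrative (this is not RapidMiner's actual API); it follows the convention observed above, where horizon = 1 leaves 0 examples between the training and testing windows:

```python
def sliding_validation(series, train_width, test_width, horizon, step):
    """Yield (train, test) slices of `series`.

    `horizon` is the forecast distance: with horizon = 1 the test
    window starts immediately after the training window (0 examples
    in between); larger values skip horizon - 1 examples.
    Hypothetical helper, not RapidMiner's implementation.
    """
    splits = []
    start = 0
    gap = horizon - 1  # examples skipped between train and test
    while start + train_width + gap + test_width <= len(series):
        train = series[start:start + train_width]
        test_start = start + train_width + gap
        test = series[test_start:test_start + test_width]
        splits.append((train, test))
        start += step  # slide the whole window forward
    return splits

# Toy series of 20 points: train on 5, test on 2, horizon 1, step 5.
for train, test in sliding_validation(list(range(20)), 5, 2, 1, 5):
    print(train, test)
```

With horizon = 1 each test window directly follows its training window; setting horizon = 365 instead would leave a 364-example gap, matching the "predict next year from last year" scenario described above.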
Hope that helps. Cheers,