Leave-One-Out Cross Validation Since the correct classification is known for only N_0 nodes (N_0 = 15 in our case), in each round we use N_0 - 1 of them as the training set and leave one out for cross validation (C-V). In every round the next correctly classified node is left out and the remaining nodes serve as the training set; the trained hypothesis is then tested on the left-out node. Averaging over the N_0 non-overlapping rounds yields error estimates for both the training set and the cross-validation set.
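The procedure above can be sketched as follows. This is a minimal illustration only: the 1-nearest-neighbour classifier and the toy (feature, label) data stand in for the paper's actual hypothesis and node data, which are not specified here.

```python
def nn_predict(train, x):
    """Predict the label of x as the label of its nearest training point.
    (1-nearest-neighbour is an assumed stand-in classifier.)"""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def loocv(data):
    """Average training and cross-validation error over N_0 rounds,
    leaving out a different node in each round."""
    n = len(data)
    train_err, cv_err = 0.0, 0.0
    for i in range(n):                       # round i leaves node i out
        x_out, y_out = data[i]
        train = data[:i] + data[i + 1:]      # the other N_0 - 1 nodes
        # Training error: fraction of training nodes misclassified.
        train_err += sum(nn_predict(train, x) != y for x, y in train) / len(train)
        # C-V error: 1 if the left-out node is misclassified, else 0.
        cv_err += nn_predict(train, x_out) != y_out
    # Average both errors over the N_0 non-overlapping rounds.
    return train_err / n, cv_err / n

# Toy data: 15 (feature, label) pairs, assumed for illustration.
data = [(i, int(i >= 7)) for i in range(15)]
tr, cv = loocv(data)
```

Each of the N_0 = 15 rounds holds out exactly one labeled node, so every node is used once for validation and N_0 - 1 times for training, with no overlap between validation points.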