Sklearn cross_val_score gives lower accuracy than manual cross validation

I am working on a text classification problem that I set up like this (I have omitted the data-processing steps, but they produce a DataFrame called data with columns X and y ):

    import sklearn.model_selection as ms
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier

    sim = Pipeline([
        ("vec", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
        ("rdf", RandomForestClassifier()),
    ])

Now I want to evaluate this model by training it on 2/3 of the data and testing it on the remaining 1/3, for example:

    train, test = ms.train_test_split(data, test_size=0.33)
    sim.fit(train.X, train.y)
    sim.score(test.X, test.y)
    # 0.533333333333

I want to do this three times with three different test splits, but using cross_val_score gives me much lower results.

    ms.cross_val_score(sim, data.X, data.y)
    # [ 0.29264069  0.36729223  0.22977941]

As far as I understand, each of the scores in this array should be produced by training on 2/3 of the data and scoring on the remaining 1/3 with the sim.score method. So why are they all so much lower?

1 answer

I solved this problem while writing my question, so here it is:

The default behavior of cross_val_score is to use KFold or StratifiedKFold to define the folds. Both default to shuffle=False , so the folds are not drawn randomly from the data:

    import numpy as np
    import sklearn.model_selection as ms

    for i, j in ms.KFold(n_splits=3).split(np.arange(9)):
        print("TRAIN:", i, "TEST:", j)
    # TRAIN: [3 4 5 6 7 8] TEST: [0 1 2]
    # TRAIN: [0 1 2 6 7 8] TEST: [3 4 5]
    # TRAIN: [0 1 2 3 4 5] TEST: [6 7 8]
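For comparison, passing shuffle=True permutes the indices before splitting them into folds. A minimal sketch (the random_state=0 is my own choice, purely for reproducibility):

```python
import numpy as np
import sklearn.model_selection as ms

# With shuffle=True the indices are permuted before being split into folds,
# so each fold mixes samples from across the whole dataset
folds = list(ms.KFold(n_splits=3, shuffle=True, random_state=0).split(np.arange(9)))
for train_idx, test_idx in folds:
    print("TRAIN:", train_idx, "TEST:", test_idx)
```

The folds still partition the data (every index appears in exactly one test fold); only the assignment of indices to folds is randomized.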

My source data was sorted by label, so with this default behavior I was trying to predict many labels that never appeared in the training data. The effect is even more pronounced if I force plain KFold (since this is a classification task, StratifiedKFold was the default):
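To see why label-sorted data is so damaging here, consider a minimal sketch with a made-up label array sorted by class: with the unshuffled default, every test fold consists entirely of a class the model never saw during training.

```python
import numpy as np
import sklearn.model_selection as ms

# Toy labels sorted by class, mimicking label-sorted source data
y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

# With shuffle=False each test fold is exactly one class, and that class
# never appears in the corresponding training fold
unseen = [set(y[test]) - set(y[train])
          for train, test in ms.KFold(n_splits=3).split(y)]
print(unseen)  # [{0}, {1}, {2}] -- every test label is unseen in training
```

A classifier can never predict a class absent from its training data, so each fold's accuracy collapses toward zero.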

    ms.cross_val_score(sim, data.X, data.y, cv=ms.KFold())
    # array([ 0.05530776,  0.05709188,  0.025     ])
    ms.cross_val_score(sim, data.X, data.y, cv=ms.StratifiedKFold(shuffle=False))
    # array([ 0.2978355 ,  0.35924933,  0.27205882])
    ms.cross_val_score(sim, data.X, data.y, cv=ms.KFold(shuffle=True))
    # array([ 0.51561106,  0.50579839,  0.51785714])
    ms.cross_val_score(sim, data.X, data.y, cv=ms.StratifiedKFold(shuffle=True))
    # array([ 0.52869565,  0.54423592,  0.55626715])

Doing it manually gave me higher scores because train_test_split shuffles by default, i.e. it does the same thing as KFold(shuffle=True) .
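That difference is easy to demonstrate on the same kind of label-sorted toy array (the array y below is made up for illustration): train_test_split shuffles unless told otherwise, whereas shuffle=False simply hands the tail of the data to the test set.

```python
import numpy as np
import sklearn.model_selection as ms

# Toy labels sorted by class
y = np.array([0] * 6 + [1] * 6)

# shuffle=True is the default, mirroring KFold(shuffle=True)
train_sh, test_sh = ms.train_test_split(y, test_size=0.33, random_state=0)

# With shuffle=False the test set is just the tail of the sorted array,
# so it contains only the last class
train_ns, test_ns = ms.train_test_split(y, test_size=0.33, shuffle=False)
print(set(test_ns))  # {1} -- the unshuffled test set has a single class
```

So the manual 2/3–1/3 experiment was implicitly shuffling all along, which is why its score matched the shuffled cross-validation runs.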


Source: https://habr.com/ru/post/1267299/
