How to break data into 3 sets (train, validation and test)?

I have a pandas dataframe and I want to split it into 3 separate sets. I know that with train_test_split from sklearn.cross_validation (now sklearn.model_selection), you can split the data into two sets (train and test). However, I could not find a solution for splitting it into three sets. I would also like to keep the indices of the original data.

I know that a workaround would be to use train_test_split twice and somehow tune the indexes. But is there a more standard / built-in way to split data into 3 sets instead of 2?

+110
numpy pandas scikit-learn machine-learning dataframe
Jul 07 '16 at 16:26
5 answers

Numpy solution. We will split the data set into the following parts:

  • 60% - train set,
  • 20% - validation set,
  • 20% - test set



    In [305]: train, validate, test = \
       .....:     np.split(df.sample(frac=1), [int(.6*len(df)), int(.8*len(df))])

    In [306]: train
    Out[306]:
              A         B         C         D         E
    0  0.046919  0.792216  0.206294  0.440346  0.038960
    2  0.301010  0.625697  0.604724  0.936968  0.870064
    1  0.642237  0.690403  0.813658  0.525379  0.396053
    9  0.488484  0.389640  0.599637  0.122919  0.106505
    8  0.842717  0.793315  0.554084  0.100361  0.367465
    7  0.185214  0.603661  0.217677  0.281780  0.938540

    In [307]: validate
    Out[307]:
              A         B         C         D         E
    5  0.806176  0.008896  0.362878  0.058903  0.026328
    6  0.145777  0.485765  0.589272  0.806329  0.703479

    In [308]: test
    Out[308]:
              A         B         C         D         E
    4  0.521640  0.332210  0.370177  0.859169  0.401087
    3  0.333348  0.964011  0.083498  0.670386  0.169619

[int(.6*len(df)), int(.8*len(df))] is the indices_or_sections argument for numpy.split().

Here is a small demonstration of np.split() - let's split an array of 20 elements into parts of 80%, 10% and 10%:

    In [45]: a = np.arange(1, 21)

    In [46]: a
    Out[46]:
    array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
           17, 18, 19, 20])

    In [47]: np.split(a, [int(.8 * len(a)), int(.9 * len(a))])
    Out[47]:
    [array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16]),
     array([17, 18]),
     array([19, 20])]
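Putting the pieces together, here is a minimal self-contained sketch of the 60/20/20 split. The DataFrame contents and the random_state value are illustrative assumptions; df.sample(frac=1) shuffles the rows reproducibly while preserving the original index, which the question asked for:

```python
import numpy as np
import pandas as pd

# Illustrative data: 10 rows, 5 columns (any DataFrame works).
df = pd.DataFrame(np.arange(50).reshape(10, 5), columns=list('ABCDE'))

# Shuffle reproducibly, then cut at the 60% and 80% marks.
train, validate, test = np.split(
    df.sample(frac=1, random_state=42),
    [int(.6 * len(df)), int(.8 * len(df))]
)

print(len(train), len(validate), len(test))  # 6 2 2
```

Because df.sample only reorders rows, the three pieces together still carry every original index exactly once.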
+120
Jul 07 '16 at 16:56

Remarks:

The function below handles seeding of the randomized set creation; you should not rely on a split that does not shuffle the rows.

    import numpy as np
    import pandas as pd

    def train_validate_test_split(df, train_percent=.6, validate_percent=.2,
                                  seed=None):
        np.random.seed(seed)
        perm = np.random.permutation(df.index)
        m = len(df.index)
        train_end = int(train_percent * m)
        validate_end = int(validate_percent * m) + train_end
        # df.loc replaces the deprecated df.ix indexer
        train = df.loc[perm[:train_end]]
        validate = df.loc[perm[train_end:validate_end]]
        test = df.loc[perm[validate_end:]]
        return train, validate, test

Demonstration:

    np.random.seed([3, 1415])
    df = pd.DataFrame(np.random.rand(10, 5), columns=list('ABCDE'))
    df

(image: the generated DataFrame df)

    train, validate, test = train_validate_test_split(df)
    train

(image: the train split)

 validate 

(image: the validate split)

 test 

(image: the test split)

+44
Jul 07 '16 at 16:47

One approach to splitting the data set into train, test, cv with proportions 0.6, 0.2, 0.2 is to call the train_test_split method twice:

    from sklearn.model_selection import train_test_split

    x, x_test, y, y_test = train_test_split(xtrain, labels, test_size=0.2,
                                            train_size=0.8)
    x_train, x_cv, y_train, y_cv = train_test_split(x, y, test_size=0.25,
                                                    train_size=0.75)
+29
Mar 21 '17 at 16:10

One approach is to use the train_test_split function twice.

    from sklearn.model_selection import train_test_split

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=1)
    X_train, X_val, y_train, y_val = train_test_split(
        X_train, y_train, test_size=0.25, random_state=1)
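As a sanity check, here is a sketch of the two-step split on made-up data (the array shapes and random_state values are illustrative assumptions). Note that test_size=0.25 in the second call is 0.25 of the remaining 80%, i.e. 20% of the original, giving a 60/20/20 split:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Made-up data: 100 samples, 5 features.
X = np.arange(500).reshape(100, 5)
y = np.arange(100)

# First cut: 80% train+val vs. 20% test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1)

# Second cut: 0.25 of the remaining 80% equals 20% of the original.
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=1)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20
```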
+8
Apr 30 '18 at 10:49

Using train_test_split is convenient because it requires no reindexing after splitting and no extra code. However, the top answer above does not mention that calling train_test_split twice without rescaling the partition sizes will not give the originally intended partition:

 x_train, x_remain = train_test_split(x, test_size=(val_size + test_size)) 

The validation and test fractions within x_remain then change, and can be recalculated as

    # Rescale so that (new_test_size + new_val_size) = 1.0 within x_remain
    new_test_size = np.around(test_size / (val_size + test_size), 2)
    new_val_size = 1.0 - new_test_size
    x_val, x_test = train_test_split(x_remain, test_size=new_test_size)

This way, all of the originally intended proportions are preserved.
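A minimal end-to-end sketch of this rescaling approach, assuming 60/20/20 target proportions (the data, val_size, test_size, and random_state values are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative target proportions: 60% train, 20% val, 20% test.
val_size, test_size = 0.2, 0.2
x = np.arange(100)

# First cut: peel off train, leaving val + test together in x_remain.
x_train, x_remain = train_test_split(
    x, test_size=val_size + test_size, random_state=0)

# Rescale test_size relative to x_remain so the final proportions
# match the original intent (0.2 / 0.4 = 0.5 of the remainder).
new_test_size = np.around(test_size / (val_size + test_size), 2)
x_val, x_test = train_test_split(
    x_remain, test_size=new_test_size, random_state=0)

print(len(x_train), len(x_val), len(x_test))  # 60 20 20
```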

+1
Nov 16 '18 at 9:35


