I have a pandas dataframe and I wish to divide it into 3 separate sets. I know that using train_test_split from sklearn.cross_validation one can divide the data into two sets (train and test). However, I couldn't find any solution for dividing the data into three sets. Preferably, I'd like to have the indices of the original data.

I know that a workaround would be to use train_test_split two times and somehow adjust the indices. But is there a more standard / built-in way to split the data into 3 sets instead of 2?


Current answer

Note:

The function is written to handle seeding of the randomized set creation. You should not rely on set splitting that does not randomize the sets.

import numpy as np
import pandas as pd

def train_validate_test_split(df, train_percent=.6, validate_percent=.2, seed=None):
    np.random.seed(seed)
    perm = np.random.permutation(df.index)  # permuted index labels
    m = len(df.index)
    train_end = int(train_percent * m)
    validate_end = int(validate_percent * m) + train_end
    # perm holds index labels, so use loc (label-based), not iloc (position-based)
    train = df.loc[perm[:train_end]]
    validate = df.loc[perm[train_end:validate_end]]
    test = df.loc[perm[validate_end:]]
    return train, validate, test

Demonstration

np.random.seed([3,1415])
df = pd.DataFrame(np.random.rand(10, 5), columns=list('ABCDE'))
df

train, validate, test = train_validate_test_split(df)

train

validate

test
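As a quick sanity check, with the default fractions (.6 train, .2 validate, .2 test) a 10-row frame should split 6/2/2 and cover every row exactly once. Below is a minimal self-contained sketch restating the split logic above (using NumPy's newer `default_rng` generator for seeding) so it can be run on its own:

```python
import numpy as np
import pandas as pd

# Restatement of the split logic above, self-contained for demonstration.
def train_validate_test_split(df, train_percent=.6, validate_percent=.2, seed=None):
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(df))          # permuted integer positions
    train_end = int(train_percent * len(df))
    validate_end = int(validate_percent * len(df)) + train_end
    train = df.iloc[perm[:train_end]]
    validate = df.iloc[perm[train_end:validate_end]]
    test = df.iloc[perm[validate_end:]]
    return train, validate, test

df = pd.DataFrame(np.random.rand(10, 5), columns=list('ABCDE'))
train, validate, test = train_validate_test_split(df, seed=0)
print(len(train), len(validate), len(test))  # 6 2 2
```

The three index sets are disjoint and together cover the original index, which is what "preferably with the indices of the original data" asks for.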

Other answers

The simplest way I can think of is to map the split fractions to array indices as follows:

train_set = data[:int((len(data)+1)*train_fraction)]
test_set = data[int((len(data)+1)*train_fraction):int((len(data)+1)*(train_fraction+test_fraction))]
val_set = data[int((len(data)+1)*(train_fraction+test_fraction)):]

where data has been shuffled beforehand. Note that random.shuffle shuffles in place and returns None, so writing data = random.shuffle(data) would set data to None; call random.shuffle(data) on its own line instead.
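A runnable version of this approach, with assumed fractions of 60% train / 20% test / 20% validation (the fractions are not specified above):

```python
import random

# Assumed split fractions (hypothetical; not fixed by the answer above).
train_fraction, test_fraction = 0.6, 0.2

data = list(range(100))
random.shuffle(data)  # shuffles in place and returns None

train_set = data[:int((len(data) + 1) * train_fraction)]
test_set = data[int((len(data) + 1) * train_fraction):int((len(data) + 1) * (train_fraction + test_fraction))]
val_set = data[int((len(data) + 1) * (train_fraction + test_fraction)):]

print(len(train_set), len(test_set), len(val_set))  # 60 20 20
```

Because these are plain slices, the three sets are disjoint and their union is the full shuffled list.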

Considering that df is your original dataframe:

1 - First you split the data between train and test (10% for test):

my_test_size = 0.10

X_train_, X_test, y_train_, y_test = train_test_split(
    df.index.values,
    df.label.values,
    test_size=my_test_size,
    random_state=42,
    stratify=df.label.values,    
)

2 - Then you split the train set between train and validation (20% of the train set for validation):

my_val_size = 0.20

X_train, X_val, y_train, y_val = train_test_split(
    df.loc[X_train_].index.values,
    df.loc[X_train_].label.values,
    test_size=my_val_size,
    random_state=42,
    stratify=df.loc[X_train_].label.values,  
)

3 - Then you slice the original dataframe according to the indices generated in the steps above:

# data_type is not necessary. 
df['data_type'] = ['not_set']*df.shape[0]
df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'
df.loc[X_test, 'data_type'] = 'test'

The result is that each row of df now carries a data_type value of 'train', 'val', or 'test'.

Note: this solution uses the workaround mentioned in the question.
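The three steps can be run end to end on a small hypothetical labeled frame (100 rows, two balanced classes; the sizes and class balance here are illustrative assumptions, not part of the answer above):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical labeled frame: 100 rows, two balanced classes.
df = pd.DataFrame({'feature': np.arange(100),
                   'label': [0, 1] * 50})

# Step 1: split off 10% for test, stratified on the label.
X_train_, X_test, y_train_, y_test = train_test_split(
    df.index.values, df.label.values,
    test_size=0.10, random_state=42, stratify=df.label.values)

# Step 2: split 20% of the remaining rows off for validation.
X_train, X_val, y_train, y_val = train_test_split(
    df.loc[X_train_].index.values, df.loc[X_train_].label.values,
    test_size=0.20, random_state=42, stratify=df.loc[X_train_].label.values)

# Step 3: tag each original row with its set.
df['data_type'] = 'not_set'
df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'
df.loc[X_test, 'data_type'] = 'test'

print(df['data_type'].value_counts())  # train: 72, val: 18, test: 10
```

Because train_test_split is called on df.index.values, the resulting arrays are original-dataframe indices, which is exactly what the question asks for.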

However, one approach to dividing the dataset into train, test, cv with 0.6, 0.2, 0.2 proportions is to use the train_test_split method twice.

from sklearn.model_selection import train_test_split

x, x_test, y, y_test = train_test_split(xtrain, labels, test_size=0.2, train_size=0.8)
x_train, x_cv, y_train, y_cv = train_test_split(x, y, test_size=0.25, train_size=0.75)

def train_val_test_split(X, y, train_size, val_size, test_size):
    X_train_val, X_test, y_train_val, y_test = train_test_split(X, y, test_size = test_size)
    relative_train_size = train_size / (val_size + train_size)
    X_train, X_val, y_train, y_val = train_test_split(X_train_val, y_train_val,
                                                      train_size = relative_train_size, test_size = 1-relative_train_size)
    return X_train, X_val, X_test, y_train, y_val, y_test

Here we split the data twice using sklearn's train_test_split.

Using train_test_split is very convenient: there is no need to re-index after dividing into several sets and no extra code to write. What the top answer above does not mention is that calling train_test_split twice without adjusting the partition sizes will not give the initially intended partition:

x_train, x_remain = train_test_split(x, test_size=(val_size + test_size))

The fractions of the validation and test sets within x_remain then change, and can be computed as:

new_test_size = np.around(test_size / (val_size + test_size), 2)
# To preserve (new_test_size + new_val_size) = 1.0 
new_val_size = 1.0 - new_test_size

x_val, x_test = train_test_split(x_remain, test_size=new_test_size)

In this case, all the initial partition sizes are preserved.
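Putting the two calls together, a self-contained sketch with an assumed 60/20/20 target split (val_size and test_size are illustrative values, not fixed by the answer above):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Assumed target partition: 60% train, 20% val, 20% test.
val_size, test_size = 0.2, 0.2

x = np.arange(100)
x_train, x_remain = train_test_split(x, test_size=val_size + test_size)

# Re-express the test fraction relative to x_remain so the final
# partition keeps the original 60/20/20 proportions.
new_test_size = np.around(test_size / (val_size + test_size), 2)
x_val, x_test = train_test_split(x_remain, test_size=new_test_size)

print(len(x_train), len(x_val), len(x_test))  # 60 20 20
```

Without the rescaling, passing test_size=0.2 to the second call would carve the test set out of only 40 remaining samples, yielding 8 test samples instead of the intended 20.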