How do I split data into 3 sets (train, validation, and test)?

I have a pandas dataframe and I wish to divide it into 3 separate sets. I know that using train_test_split from sklearn.cross_validation, one can divide the data into two sets (train and test). However, I couldn't find any solution for dividing the data into three sets. Preferably, I'd like to have the indices of the original data.

I know that a workaround would be to use train_test_split two times and somehow adjust the indices. But is there a more standard / built-in way to split the data into 3 sets instead of 2?


Note:

The function is written to handle seeding of the randomized set creation. You should not rely on set splitting that doesn't randomize the sets.

import numpy as np
import pandas as pd


def train_validate_test_split(df, train_percent=.6, validate_percent=.2, seed=None):
    np.random.seed(seed)
    m = len(df.index)
    # Permute row positions (rather than index labels) so .iloc is correct
    # even when the dataframe does not have a default RangeIndex.
    perm = np.random.permutation(m)
    train_end = int(train_percent * m)
    validate_end = int(validate_percent * m) + train_end
    train = df.iloc[perm[:train_end]]
    validate = df.iloc[perm[train_end:validate_end]]
    test = df.iloc[perm[validate_end:]]
    return train, validate, test

Demo

np.random.seed([3,1415])
df = pd.DataFrame(np.random.rand(10, 5), columns=list('ABCDE'))
df


train, validate, test = train_validate_test_split(df)


train


validate


test

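Because the function permutes row positions and selects with .iloc, each returned frame keeps the row labels of the original dataframe, which addresses the "preferably with the indices of the original data" part of the question. A minimal check, reusing the splits from the demo above:

print(train.index)     # a subset of df.index, in shuffled order
print(validate.index)
print(test.index)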

NumPy solution. We will shuffle the whole dataset first (df.sample(frac=1, random_state=42)) and then split it into the following parts:

  • 60% - train set,
  • 20% - validation set,
  • 20% - test set

In [305]: train, validate, test = \
              np.split(df.sample(frac=1, random_state=42),
                       [int(.6*len(df)), int(.8*len(df))])


In [306]: train
Out[306]:
          A         B         C         D         E
0  0.046919  0.792216  0.206294  0.440346  0.038960
2  0.301010  0.625697  0.604724  0.936968  0.870064
1  0.642237  0.690403  0.813658  0.525379  0.396053
9  0.488484  0.389640  0.599637  0.122919  0.106505
8  0.842717  0.793315  0.554084  0.100361  0.367465
7  0.185214  0.603661  0.217677  0.281780  0.938540


In [307]: validate
Out[307]:
          A         B         C         D         E
5  0.806176  0.008896  0.362878  0.058903  0.026328
6  0.145777  0.485765  0.589272  0.806329  0.703479


In [308]: test
Out[308]:
          A         B         C         D         E
4  0.521640  0.332210  0.370177  0.859169  0.401087
3  0.333348  0.964011  0.083498  0.670386  0.169619

[int(.6*len(df)), int(.8*len(df))] is the indices_or_sections array for numpy.split().

Here is a small demo using np.split() - let's split a 20-element array into the following parts: 80%, 10%, 10%:

In [45]: a = np.arange(1, 21)


In [46]: a
Out[46]: array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20])


In [47]: np.split(a, [int(.8 * len(a)), int(.9 * len(a))])
Out[47]:
[array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16]),
array([17, 18]),
array([19, 20])]

However, one approach to dividing the dataset into train, test, and cv with 0.6, 0.2, 0.2 is to use the train_test_split method twice.

from sklearn.model_selection import train_test_split


# First carve off 20% of the data as the test set.
x, x_test, y, y_test = train_test_split(xtrain, labels, test_size=0.2, train_size=0.8)
# Then split the remaining 80%: 0.25 * 0.8 = 0.2 of the total becomes cv.
x_train, x_cv, y_train, y_cv = train_test_split(x, y, test_size=0.25, train_size=0.75)
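A quick sanity check of the resulting proportions (a sketch, reusing xtrain and the splits from above):

n = len(xtrain)
print(len(x_train) / n, len(x_cv) / n, len(x_test) / n)  # ~0.6, ~0.2, ~0.2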

Using train_test_split is very convenient; there is no need to re-index after dividing the data into several sets, and no extra code has to be written. The top answer above does not mention that splitting twice with train_test_split without adjusting the partition sizes will not give the initially intended partition:

x_train, x_remain = train_test_split(x, test_size=(val_size + test_size))

Then the fractions of the validation and test sets in x_remain change, and can be computed as:

new_test_size = np.around(test_size / (val_size + test_size), 2)
# To preserve (new_test_size + new_val_size) = 1.0
new_val_size = 1.0 - new_test_size


x_val, x_test = train_test_split(x_remain, test_size=new_test_size)

This way, all the initial partition sizes are preserved.
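Putting it together, a minimal sketch of the full three-way split with the rescaled sizes (the toy x and the 0.2/0.2 sizes are assumptions for illustration):

import numpy as np
from sklearn.model_selection import train_test_split

x = np.arange(100)
val_size, test_size = 0.2, 0.2

# First split off train; x_remain holds val + test (40% of the data).
x_train, x_remain = train_test_split(x, test_size=(val_size + test_size))

# Rescale so the second split divides x_remain in the intended proportion.
new_test_size = np.around(test_size / (val_size + test_size), 2)  # 0.5
new_val_size = 1.0 - new_test_size                                # 0.5

x_val, x_test = train_test_split(x_remain, test_size=new_test_size)
print(len(x_train), len(x_val), len(x_test))  # 60 20 20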

Here is a Python function that splits a Pandas dataframe into stratified train, validation, and test dataframes. It performs this split by calling scikit-learn's function train_test_split() twice.

import pandas as pd
from sklearn.model_selection import train_test_split


def split_stratified_into_train_val_test(df_input, stratify_colname='y',
                                         frac_train=0.6, frac_val=0.15, frac_test=0.25,
                                         random_state=None):
    '''
    Splits a Pandas dataframe into three subsets (train, val, and test)
    following fractional ratios provided by the user, where each subset is
    stratified by the values in a specific column (that is, each subset has
    the same relative frequency of the values in the column). It performs this
    splitting by running train_test_split() twice.

    Parameters
    ----------
    df_input : Pandas dataframe
        Input dataframe to be split.
    stratify_colname : str
        The name of the column that will be used for stratification. Usually
        this column would be for the label.
    frac_train : float
    frac_val   : float
    frac_test  : float
        The ratios with which the dataframe will be split into train, val, and
        test data. The values should be expressed as float fractions and should
        sum to 1.0.
    random_state : int, None, or RandomStateInstance
        Value to be passed to train_test_split().

    Returns
    -------
    df_train, df_val, df_test :
        Dataframes containing the three splits.
    '''

    if frac_train + frac_val + frac_test != 1.0:
        raise ValueError('fractions %f, %f, %f do not add up to 1.0' %
                         (frac_train, frac_val, frac_test))

    if stratify_colname not in df_input.columns:
        raise ValueError('%s is not a column in the dataframe' % (stratify_colname))

    X = df_input  # Contains all columns.
    y = df_input[[stratify_colname]]  # Dataframe of just the column on which to stratify.

    # Split original dataframe into train and temp dataframes.
    df_train, df_temp, y_train, y_temp = train_test_split(X,
                                                          y,
                                                          stratify=y,
                                                          test_size=(1.0 - frac_train),
                                                          random_state=random_state)

    # Split the temp dataframe into val and test dataframes.
    relative_frac_test = frac_test / (frac_val + frac_test)
    df_val, df_test, y_val, y_test = train_test_split(df_temp,
                                                      y_temp,
                                                      stratify=y_temp,
                                                      test_size=relative_frac_test,
                                                      random_state=random_state)

    assert len(df_input) == len(df_train) + len(df_val) + len(df_test)

    return df_train, df_val, df_test

Below is a complete working example.

Consider a dataset that has a label upon which you want to perform the stratification. This label has its own distribution in the original dataset, say 75% foo, 15% bar and 10% baz. Now let's split the dataset into train, validation, and test subsets using a 60/20/20 ratio, where each split retains the same distribution of the labels.


Here is the example dataset:

df = pd.DataFrame( { 'A': list(range(0, 100)),
                     'B': list(range(100, 0, -1)),
                     'label': ['foo'] * 75 + ['bar'] * 15 + ['baz'] * 10 } )


df.head()
#    A    B label
# 0  0  100   foo
# 1  1   99   foo
# 2  2   98   foo
# 3  3   97   foo
# 4  4   96   foo


df.shape
# (100, 3)


df.label.value_counts()
# foo    75
# bar    15
# baz    10
# Name: label, dtype: int64

Now, let's call the split_stratified_into_train_val_test() function from above to get train, validation, and test dataframes following a 60/20/20 ratio.

df_train, df_val, df_test = \
    split_stratified_into_train_val_test(df, stratify_colname='label',
                                         frac_train=0.60, frac_val=0.20, frac_test=0.20)

The three dataframes df_train, df_val, and df_test contain all the original rows, and their sizes follow the ratio above.

df_train.shape
#(60, 3)


df_val.shape
#(20, 3)


df_test.shape
#(20, 3)

Furthermore, each of the three splits has the same distribution of the label, namely 75% foo, 15% bar and 10% baz.

df_train.label.value_counts()
# foo    45
# bar     9
# baz     6
# Name: label, dtype: int64


df_val.label.value_counts()
# foo    15
# bar     3
# baz     2
# Name: label, dtype: int64


df_test.label.value_counts()
# foo    15
# bar     3
# baz     2
# Name: label, dtype: int64

In the case of supervised learning, you may want to split both X and y (where X is your input and y the ground-truth output). You just have to pay attention to shuffling X and y the same way before splitting.

Here, either X and y are in the same dataframe, so we shuffle them, separate them, and apply the split to each (exactly as in the chosen answer), or X and y are in two different dataframes, so we shuffle X, reorder y the same way as the shuffled X, and apply the split to each.

# 1st case: df contains X and y (where y is the "target" column of df)
df_shuffled = df.sample(frac=1)
X_shuffled = df_shuffled.drop("target", axis = 1)
y_shuffled = df_shuffled["target"]


# 2nd case: X and y are two separated dataframes
X_shuffled = X.sample(frac=1)
y_shuffled = y[X_shuffled.index]


# We do the split as in the chosen answer
X_train, X_validation, X_test = np.split(X_shuffled, [int(0.6*len(X)),int(0.8*len(X))])
y_train, y_validation, y_test = np.split(y_shuffled, [int(0.6*len(X)),int(0.8*len(X))])

from sklearn.model_selection import train_test_split


def train_val_test_split(X, y, train_size, val_size, test_size):
    # First carve off the test set.
    X_train_val, X_test, y_train_val, y_test = train_test_split(X, y, test_size=test_size)
    # Then split the remainder into train and val.
    relative_train_size = train_size / (val_size + train_size)
    X_train, X_val, y_train, y_val = train_test_split(X_train_val, y_train_val,
                                                      train_size=relative_train_size,
                                                      test_size=1 - relative_train_size)
    return X_train, X_val, X_test, y_train, y_val, y_test

Here we split the data 2 times with sklearn's train_test_split.
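For example, a usage sketch of the function above (the toy X and y are assumptions for illustration):

import numpy as np

X = np.arange(100).reshape(50, 2)
y = np.arange(50)

X_train, X_val, X_test, y_train, y_val, y_test = \
    train_val_test_split(X, y, train_size=0.6, val_size=0.2, test_size=0.2)
print(len(X_train), len(X_val), len(X_test))  # 30 10 10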

Considering that df is your original dataframe:

1 - First you split the data between train and test sets (10% for the test set):

my_test_size = 0.10


X_train_, X_test, y_train_, y_test = train_test_split(
    df.index.values,
    df.label.values,
    test_size=my_test_size,
    random_state=42,
    stratify=df.label.values,
)

2 - Then you split the train set between train and validation sets (20% for validation):

my_val_size = 0.20


X_train, X_val, y_train, y_val = train_test_split(
    df.loc[X_train_].index.values,
    df.loc[X_train_].label.values,
    test_size=my_val_size,
    random_state=42,
    stratify=df.loc[X_train_].label.values,
)

3 - Then you slice the original dataframe according to the indices generated in the steps above:

# data_type is not necessary.
df['data_type'] = ['not_set']*df.shape[0]
df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'
df.loc[X_test, 'data_type'] = 'test'

The result is a data_type column in df that tags each row as train, val, or test.
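If you then want three separate dataframes rather than one tagged column, a minimal sketch using boolean masks:

df_train = df[df.data_type == 'train']
df_val = df[df.data_type == 'val']
df_test = df[df.data_type == 'test']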

Note: this solution uses the workaround mentioned in the question.

Split the dataset into train and test sets, as in the other answers, using

from sklearn.model_selection import train_test_split


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Then, when you fit your model, you can add validation_split as a parameter. That way you don't need to create the validation set beforehand. For example:

from tensorflow.keras import Model


model = Model(input_layer, out)


[...]


history = model.fit(x=X_train, y=y_train, [...], validation_split = 0.3)

A validation set is intended to act as a stand-in for the test set during training, and it comes entirely out of the training set, whether through k-fold cross-validation (recommended) or through validation_split; that way you don't need to create a validation set separately and can still split the dataset into the three sets you asked for.
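For the k-fold route mentioned above, a minimal sketch using scikit-learn's KFold on the training set (assuming X_train and y_train are NumPy arrays; the commented fit call is a placeholder, not part of the original answer):

from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, val_idx in kf.split(X_train):
    X_tr, X_val = X_train[train_idx], X_train[val_idx]
    y_tr, y_val = y_train[train_idx], y_train[val_idx]
    # Train and evaluate one fold here, e.g.:
    # model.fit(X_tr, y_tr, validation_data=(X_val, y_val))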

An answer for any number of subsets:

from sklearn.model_selection import train_test_split


def _separate_dataset(patches, label_patches, percentage, shuffle: bool = True):
    """
    :param patches: data patches
    :param label_patches: label patches
    :param percentage: list of percentages for each value, example [0.9, 0.02, 0.08] to get 90% train, 2% val and 8% test.
    :param shuffle: Shuffle dataset before split.
    :return: tuple of two lists of size = len(percentage), one with data x and other with labels y.
    """
    x_test = patches
    y_test = label_patches
    percentage = list(percentage)       # need it to be mutable
    assert sum(percentage) == 1., f"percentage must add to 1, but sum({percentage}) = {sum(percentage)}"
    x = []
    y = []
    # Iterate by index so each step sees the fractions rescaled by the previous step.
    for i in range(len(percentage) - 1):
        per = percentage[i]
        x_train, x_test, y_train, y_test = train_test_split(x_test, y_test, test_size=1 - per, shuffle=shuffle)
        # Rescale the remaining fractions relative to what is left after this split.
        percentage[i + 1:] = [value / (1 - per) for value in percentage[i + 1:]]
        x.append(x_train)
        y.append(y_train)
    x.append(x_test)
    y.append(y_test)
    return x, y

This works for any number of ratios. In your case, you would use percentage = [train_percentage, val_percentage, test_percentage].
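A usage sketch (the toy patches and labels below are assumptions for illustration):

import numpy as np

patches = np.arange(100)
label_patches = np.arange(100) % 2

x, y = _separate_dataset(patches, label_patches, percentage=[0.6, 0.2, 0.2])
print([len(part) for part in x])  # [60, 20, 20]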

The simplest way I can think of is to map the split fractions to array indices as follows:

import random

# Shuffle first: random.shuffle shuffles the list in place and returns None,
# so do not assign its result back to data.
random.shuffle(data)

train_set = data[:int((len(data)+1)*train_fraction)]
test_set = data[int((len(data)+1)*train_fraction):int((len(data)+1)*(train_fraction+test_fraction))]
val_set = data[int((len(data)+1)*(train_fraction+test_fraction)):]
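For example, with 10 items and toy fractions of 0.6/0.2 (assumptions, not from the original answer), the slices come out as 6/2/2:

import random

data = list(range(10))
train_fraction, test_fraction = 0.6, 0.2

random.shuffle(data)
n = len(data) + 1
train_set = data[:int(n * train_fraction)]
test_set = data[int(n * train_fraction):int(n * (train_fraction + test_fraction))]
val_set = data[int(n * (train_fraction + test_fraction)):]
print(len(train_set), len(test_set), len(val_set))  # 6 2 2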