What is the validation data used for in a Keras Sequential model?

My question is simple: what is the validation data passed to model.fit actually used for?

Does it affect how the model is trained (for example, a validation set is normally used to choose hyperparameters, but I don't think that happens here)?

I am talking about the validation set that can be passed like this:

from keras.models import Sequential

# Create model
model = Sequential()
# Add layers
model.add(...)


# Train model (use 10% of training set as validation set)
history = model.fit(X_train, Y_train, validation_split=0.1)


# Train model (use validation data as validation set)
history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test))

I did a little digging and found that keras.models.Sequential.fit calls keras.models.training.fit, which creates variables such as val_acc and val_loss (accessible from a Callback). keras.models.training.fit also calls keras.models.training._fit_loop, which adds the validation data to callbacks.validation_data, and also calls keras.models.training._test_loop, which loops over the validation data in batches on the model's self.test_function. The results of this function are used to fill the values of the logs, which are the values accessible from the callbacks.
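
For reference, here is a minimal sketch of reading those log values from a Callback (the exact key name varies by Keras version: 'val_acc' in older standalone Keras, 'val_accuracy' in tf.keras):

from tensorflow import keras

# Print the validation metrics that fit() computes at the end of each
# epoch. These keys only exist when validation data was passed to fit().
class ValLogger(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        print(f"epoch {epoch}: val_loss={logs.get('val_loss')}, "
              f"val_accuracy={logs.get('val_accuracy', logs.get('val_acc'))}")

# history = model.fit(X_train, Y_train, validation_split=0.1,
#                     callbacks=[ValLogger()])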

After seeing all this, I feel like the validation set passed to model.fit is not used to validate anything during training; its only use is to get feedback on how the trained model performs after each epoch, on a completely independent set. Given that, it would be fine to use the same set for validation and testing, right?

Can anyone confirm whether the validation set in model.fit has any purpose other than being read from callbacks?


If you want to build a solid model you have to follow the specific protocol of splitting your data into three sets: one for training, one for validation, and one for final evaluation, which is your test set.

The idea is that you train on your training data and tune your model with the results of the metrics (accuracy, loss, etc.) that you get from your validation set.

Your model doesn't "see" your validation set and isn't in any way trained on it, but you, as the architect and master of the hyperparameters, tune the model according to this data. It therefore influences your model indirectly, because it directly influences your design decisions: you nudge your model to work well with the validation data, and that can introduce a bias.

Exactly that is the reason you only evaluate your model's final score on data that neither your model nor you yourself have used, and that is the third chunk of data: your test set.

Only this procedure makes sure you get an unbiased view of your model's quality and its ability to generalize what it has learned to totally unseen data.
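
As a concrete illustration, here is a minimal sketch of that three-way split, assuming X and Y hold your full data set (the split sizes and the use of scikit-learn's train_test_split are illustrative choices, not prescriptions):

from sklearn.model_selection import train_test_split

# Hold out 20% as the test set, then 25% of the remainder (i.e. 20% of
# the total) as the validation set.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size=0.25)

# Tune the model against (X_val, Y_val)...
history = model.fit(X_train, Y_train, validation_data=(X_val, Y_val))
# ...and report the final score exactly once, on the untouched test set
# (two return values assume the model was compiled with an accuracy metric).
test_loss, test_acc = model.evaluate(X_test, Y_test)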

This YouTube video explains what a validation set is, why it's helpful, and how to implement a validation set in Keras: Create a validation set in Keras

With a validation set, you're essentially taking a fraction of your samples out of your training set, or creating an entirely new set altogether, and holding the samples in this set out of training.

During each epoch, the model will be trained on samples in the training set but will NOT be trained on samples in the validation set. Instead, the model will only be evaluated on each sample in the validation set.

The purpose of doing this is for you to be able to judge how well your model can generalize, meaning how well it is able to predict on data it has not seen while being trained.

Having a validation set also provides great insight into whether your model is overfitting. You can see this by comparing the acc and loss from your training samples to the val_acc and val_loss from your validation samples. For example, if your acc is high but your val_acc is lagging way behind, that is a good indication that your model is overfitting.
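
For example, here is a minimal sketch of that comparison using the History object returned by model.fit above (the metric keys assume tf.keras naming; older Keras uses 'acc'/'val_acc'):

import matplotlib.pyplot as plt

# Plot training vs. validation accuracy per epoch; a widening gap
# between the two curves is the overfitting signature described above.
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()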

I think an overall discussion of the train set, validation set, and test set will help:

  • Train set: the data set on which the model is trained. This is the only data set on which the weights are updated during back-propagation.
  • Validation set (development set): the data set on which we want our model to perform well. During training we tune the hyper-parameters so that the model performs well on the dev set, but we never train on it; it is only used to observe performance so that we can decide how to change the hyper-parameters, after which we continue training on the train set. The dev set stands in as a representative of unknown data, since it is not directly used for training, and the hyper-parameters act as tuning knobs that change the way training proceeds. No back-propagation occurs on the dev set, so there is no direct learning from it (see the sketch after the summary below).
  • Test set: used only for an unbiased estimate. Like the dev set, the test set is never trained on. The only difference from the validation set (dev set) is that we don't even tune the hyper-parameters here; we just see how well the model has learned to generalize. Although the dev set is not directly used for training, by repeatedly tuning the hyper-parameters against it the model indirectly picks up its patterns, and the dev set stops being unknown to the model. Hence we need another fresh set, not used even for hyper-parameter tuning, and we call this fresh copy the test set, since by definition the test set should be "unknown" to the model. If we cannot manage a fresh, unseen test set like this, the dev set is sometimes used as the test set.

Summarizing:

  • Train set: used for training.
  • Validation set / dev set: used for tuning hyper-parameters.
  • Test set: used for unbiased estimation.
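
As a concrete sketch of the tuning role of the dev set, here is one way to pick a learning rate against it. build_model is a hypothetical helper that returns a fresh model, and the loss, metric, and candidate values are illustrative assumptions:

from tensorflow import keras

# Try a few candidate learning rates; the weights are only ever updated
# on the train set, while the dev set just scores each candidate.
best_lr, best_val_acc = None, 0.0
for lr in [1e-2, 1e-3, 1e-4]:
    model = build_model()  # hypothetical helper returning a fresh model
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss='categorical_crossentropy',  # illustrative choice
                  metrics=['accuracy'])
    hist = model.fit(X_train, Y_train, epochs=5,
                     validation_data=(X_val, Y_val), verbose=0)
    val_acc = hist.history['val_accuracy'][-1]
    if val_acc > best_val_acc:
        best_lr, best_val_acc = lr, val_acc

# Only after tuning is finished does the test set get touched, once,
# for the unbiased estimate.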

And some practical points to keep in mind:

  • For training you may collect data from anywhere. It is fine if not all of your collected data comes from the domain where the model will be used. For example, if the real domain is photos taken with smartphone cameras, your data set does not have to consist of smartphone photos only; you may include data from the internet, from high-end or low-end cameras, or from anywhere else.
  • The dev set and test set, however, must reflect the real-domain data on which the model will actually be used, and they should cover all the cases that can occur there, for a better estimate.
  • The dev set and test set need not be that large. Just ensure that they cover almost all the cases or situations that may occur in real data; after that, put as much data as you can into the train set.

So basically, on the validation set the model tries to predict, but it does not update its weights (which means it does not learn from those samples), so you get a clear idea of how well your model can find patterns in the training data and apply them to new data.
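
You can check this yourself: evaluating on validation data is a forward pass only, so the weights are identical before and after. A minimal sketch, assuming X_val and Y_val as above:

import numpy as np

# Snapshot the weights, run a validation-style forward pass, and verify
# that nothing changed; no back-propagation happens on validation data.
weights_before = [w.copy() for w in model.get_weights()]
model.evaluate(X_val, Y_val, verbose=0)
weights_after = model.get_weights()
assert all(np.array_equal(b, a)
           for b, a in zip(weights_before, weights_after))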