How does Keras handle multilabel classification?

I am unsure how to interpret the default behavior of Keras in the following situation:

My Y (ground truth) was built using scikit-learn's MultiLabelBinarizer().

Therefore, to give a random example, one row of my y column is one-hot encoded as: [0,0,0,1,0,1,0,0,0,0,1].

So I have 11 classes that could be predicted, and more than one can be true; hence the multilabel nature of the problem. There are three labels for this particular sample.
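For reference, a row like the one above can be reproduced with MultiLabelBinarizer. This is a minimal sketch; the label values 3, 5 and 10 are hypothetical, chosen only to match the example row:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# 11 possible classes; the sample's labels (3, 5, 10) are hypothetical,
# picked to reproduce the example row [0,0,0,1,0,1,0,0,0,0,1].
mlb = MultiLabelBinarizer(classes=list(range(11)))
Y = mlb.fit_transform([{3, 5, 10}])
print(Y[0])  # [0 0 0 1 0 1 0 0 0 0 1]
```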

I train the model as I would for a non-multilabel problem (business as usual), and I get no errors.

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD


model = Sequential()
model.add(Dense(5000, activation='relu', input_dim=X_train.shape[1]))
model.add(Dropout(0.1))
model.add(Dense(600, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(y_train.shape[1], activation='softmax'))


sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])


model.fit(X_train, y_train, epochs=5, batch_size=2000)


score = model.evaluate(X_test, y_test, batch_size=2000)
score

What does Keras do when it encounters my y_train and sees that it is "multi" one-hot encoded, meaning there is more than one "one" present in each row of y_train? Basically, does Keras automatically perform multilabel classification? And are there any differences in how the scoring metrics should be interpreted?


In short

Don't use softmax.

Use sigmoid for activation of your output layer.

Use binary_crossentropy for loss function.

Use predict for evaluation.

Why

With softmax, increasing the score for one label lowers all the others (the outputs form a probability distribution summing to 1). You don't want that when you have multiple labels that can be true at the same time.
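The difference is easy to see numerically. Here is a minimal sketch in plain NumPy (not Keras) comparing the two activations on the same logits:

```python
import numpy as np

def softmax(z):
    # Normalized across labels: outputs always sum to 1, so labels compete.
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    # Element-wise: each label gets an independent score in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([2.0, 1.0, 0.5])
print(softmax(logits))  # sums to 1.0 -- raising one logit lowers the rest
print(sigmoid(logits))  # each value is independent of the others
```

With sigmoid, all three scores here land above 0.5, so several labels can be "on" at once; softmax would force them to share one unit of probability mass.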

Complete Code

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation
from tensorflow.keras.optimizers import SGD


model = Sequential()
model.add(Dense(5000, activation='relu', input_dim=X_train.shape[1]))
model.add(Dropout(0.1))
model.add(Dense(600, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(y_train.shape[1], activation='sigmoid'))


# Note: older Keras versions accepted SGD(lr=0.01, decay=1e-6, ...);
# current tf.keras uses `learning_rate` and no longer has a `decay` argument.
sgd = SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy',
              optimizer=sgd)


model.fit(X_train, y_train, epochs=5, batch_size=2000)


preds = model.predict(X_test)
preds[preds>=0.5] = 1
preds[preds<0.5] = 0
# score = compare preds and y_test
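One way to fill in that last comparison step is with scikit-learn's multilabel metrics. This is a sketch on toy arrays (y_true / y_pred stand in for y_test and the thresholded preds): subset accuracy counts a sample as correct only when every label matches, while micro-averaged F1 scores each label independently, so the two can differ a lot.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Toy ground truth and thresholded predictions (placeholders for y_test / preds).
y_true = np.array([[0, 1, 1],
                   [1, 0, 0]])
y_pred = np.array([[0, 1, 0],
                   [1, 0, 0]])

print(accuracy_score(y_true, y_pred))             # subset accuracy: 0.5
print(f1_score(y_true, y_pred, average='micro'))  # micro-averaged F1: 0.8
```

The first sample misses one of its two labels, so subset accuracy drops to 0.5, but micro F1 still credits the four label decisions that were correct.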

Answer from Keras Documentation

I am quoting from the Keras documentation itself.

They use a Dense output layer with sigmoid activation, which means they also treat multilabel classification as a set of per-label binary classifications with binary cross-entropy loss.

Following is model created in Keras documentation

shallow_mlp_model = keras.Sequential(
    [
        layers.Dense(512, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(lookup.vocabulary_size(), activation="sigmoid"),
    ]  # More on why "sigmoid" has been used here in a moment.
)

Keras doc link: https://keras.io/examples/nlp/multi_label_classification/