How do I add and remove new layers in keras after loading weights?

I am trying to do transfer learning; for that purpose I want to remove the last two layers of the neural network and add another two layers. Here is an example code which also gives the same error.

from keras.models import Sequential
from keras.layers import Input,Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Dropout, Activation
from keras.layers.pooling import GlobalAveragePooling2D
from keras.models import Model


in_img = Input(shape=(3, 32, 32))
x = Convolution2D(12, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(in_img)
x = Activation('relu', name='relu_conv1')(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
x = Convolution2D(3, 1, 1, border_mode='valid', name='conv2')(x)
x = Activation('relu', name='relu_conv2')(x)
x = GlobalAveragePooling2D()(x)
o = Activation('softmax', name='loss')(x)
model = Model(input=in_img, output=[o])
model.compile(loss="categorical_crossentropy", optimizer="adam")
#model.load_weights('model_weights.h5', by_name=True)
model.summary()


model.layers.pop()
model.layers.pop()
model.summary()
model.add(MaxPooling2D())
model.add(Activation('sigmoid', name='loss'))

I removed the layers using pop(), but when I tried to add new ones it gave this error:

AttributeError: 'Model' object has no attribute 'add'

I know the most probable reason for this error is improper use of model.add(). What other syntax should I use?

Edit:

I tried to remove/add layers in keras, but it does not allow them to be added after loading external weights.

from keras.models import Sequential
from keras.layers import Input,Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Dropout, Activation
from keras.layers.pooling import GlobalAveragePooling2D
from keras.models import Model
in_img = Input(shape=(3, 32, 32))


def gen_model():
    in_img = Input(shape=(3, 32, 32))
    x = Convolution2D(12, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(in_img)
    x = Activation('relu', name='relu_conv1')(x)
    x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
    x = Convolution2D(3, 1, 1, border_mode='valid', name='conv2')(x)
    x = Activation('relu', name='relu_conv2')(x)
    x = GlobalAveragePooling2D()(x)
    o = Activation('softmax', name='loss')(x)
    model = Model(input=in_img, output=[o])
    return model


#parent model
model=gen_model()
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.summary()


#saving model weights
model.save('model_weights.h5')


#loading weights to second model
model2=gen_model()
model2.compile(loss="categorical_crossentropy", optimizer="adam")
model2.load_weights('model_weights.h5', by_name=True)


model2.layers.pop()
model2.layers.pop()
model2.summary()


#editing layers in the second model and saving as third model
x = MaxPooling2D()(model2.layers[-1].output)
o = Activation('sigmoid', name='loss')(x)
model3 = Model(input=in_img, output=[o])

It shows this error:

RuntimeError: Graph disconnected: cannot obtain value for tensor input_4 at layer "input_4". The following previous layers were accessed without issue: []

You can take the output of the last model and create a new model. The lower layers remain the same.

model.summary()
model.layers.pop()
model.layers.pop()
model.summary()


x = MaxPooling2D()(model.layers[-1].output)
o = Activation('sigmoid', name='loss')(x)


model2 = Model(input=in_img, output=[o])
model2.summary()

Check How to use models from keras.applications for transfer learning?

Update on Edit:

The new error occurs because you are trying to create the new model on the global in_img, which is not actually used in the previous model creation; inside gen_model() you define a local in_img. So the global in_img is obviously not connected to the upper layers in the symbolic graph, and it has nothing to do with loading weights.

To resolve this problem, you should instead use model.input to reference the input.

model3 = Model(input=model2.input, output=[o])

Another way to do it

from keras.models import Model


layer_name = 'relu_conv2'
model2= Model(inputs=model1.input, outputs=model1.get_layer(layer_name).output)
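
If the goal is transfer learning, you can then stack a fresh head on top of the truncated model2. A minimal sketch; the pooling layer, the Dense size and the names new_out/model3 are illustrative assumptions, not part of the original answer:

from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model


# stack an illustrative new classification head on the truncated model2
x = GlobalAveragePooling2D()(model2.output)
new_out = Dense(2, activation='softmax', name='new_head')(x)
model3 = Model(inputs=model2.input, outputs=new_out)
model3.compile(loss='categorical_crossentropy', optimizer='adam')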

As of Keras 2.3.1 and TensorFlow 2.0, model.layers.pop() is not working as intended (see issue here). They suggested two options to do this.

One option is to recreate the model and copy the layers. For instance, if you want to remove the last layer and add another one, you can do:

model = Sequential()
for layer in source_model.layers[:-1]:  # go through until last layer
    model.add(layer)
model.add(Dense(3, activation='softmax'))
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy')
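
For transfer learning you usually also want to freeze the copied layers so that only the new head is trained. A small sketch of that extra step (my addition, assuming the model built above):

# freeze everything except the newly added Dense head, then recompile
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer='adam', loss='categorical_crossentropy')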

Another option is to use the functional model:

predictions = Dense(3, activation='softmax')(source_model.layers[-2].output)
model = Model(inputs=source_model.input, outputs=predictions)  # reuse the source model's input tensor
model.compile(optimizer='adam', loss='categorical_crossentropy')

model.layers[-1].output means the last layer's output, which is the final output, so in your code you actually didn't remove any layers; you added another head/path.
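
In other words, layers.pop() only edits the Python list, not the underlying graph. To genuinely drop the last two layers, rebuild from the last layer you want to keep, for example by name. A sketch assuming the architecture and channels-first input from the question (trimmed_model is an illustrative name):

from keras.layers import MaxPooling2D, Activation
from keras.models import Model


# rebuild from 'relu_conv2' so the GlobalAveragePooling2D + softmax head is really excluded
x = model.get_layer('relu_conv2').output
x = MaxPooling2D()(x)
o = Activation('sigmoid', name='loss')(x)
trimmed_model = Model(inputs=model.input, outputs=o)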

An alternative to Wesam Na's answer: if you don't know the layer names, you can simply cut off the last layer via:

from keras.models import Model


model2= Model(inputs=model1.input, outputs=model1.layers[-2].output)
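
The truncated model2 can then be used directly as a feature extractor. A quick sanity check with dummy data (shapes are taken from model1 and purely illustrative):

import numpy as np


# model2 ends at the penultimate layer of model1
dummy = np.random.rand(1, *model1.input_shape[1:])
features = model2.predict(dummy)
print(features.shape)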