A little over a year ago I wrote a beginner's guide to Keras: 超快速!10分鐘入門Keras指南 (Super fast! A 10-minute introduction to Keras). Opening my blog the other day, I found that the article was approaching 30K views. Over the past half year or so my research has also used some of Keras's more advanced features, so I figured it was time to write an article on advanced Keras usage and record my days of "slacking off".
Training multi-label data with Keras

When doing classification with Keras, an image usually corresponds to a single class, but in real problems you may need to predict several attributes of one image. For example, the pyimagesearch article "multi-label-classification-with-keras" uses a clothing dataset with two attribute groups: color (blue, red, black) and clothing type (dress, jeans, shirt). If the one-hot vector is ordered as (blue, red, black, dress, jeans, shirt), then the label for black jeans is [0, 0, 1, 0, 1, 0].
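As a quick illustration (my own minimal sketch, not from the pyimagesearch post), scikit-learn's MultiLabelBinarizer produces exactly this kind of multi-hot encoding; note that it orders the classes alphabetically, so the columns may not match the (blue, red, black, dress, jeans, shirt) order used above:

from sklearn.preprocessing import MultiLabelBinarizer

# each sample is the set of attributes for one image, e.g. parsed from its folder name
samples = [("black", "jeans"), ("blue", "dress"), ("red", "shirt")]

mlb = MultiLabelBinarizer()
encoded = mlb.fit_transform(samples)

print(mlb.classes_)  # ['black' 'blue' 'dress' 'jeans' 'red' 'shirt']
print(encoded[0])    # multi-hot vector for black jeans: [1 0 0 1 0 0]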
So, faced with a multi-label task like this, how do we build and train a CNN model with Keras?
First we build a CNN with a single input (one image) and multiple outputs (the image's attributes, e.g. the clothing's color and type).
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from keras.models import Model

def GenModel(img_width=512, img_height=512, model_name='AlexNet'):
    image_input_shape = (img_width, img_height, 3)
    if model_name == 'AlexNet':
        print('\n---Start build model ', model_name, '---\n')
        image_input = Input(shape=image_input_shape, name='image_input')

        # AlexNet-style convolutional trunk, shared by both output branches
        conv_image = Conv2D(96, (11, 11), strides=(4, 4), padding='valid', activation='relu')(image_input)
        conv_image = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(conv_image)

        conv_image = Conv2D(256, (5, 5), strides=(1, 1), padding='same', activation='relu')(conv_image)
        conv_image = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(conv_image)

        conv_image = Conv2D(384, (3, 3), strides=(1, 1), padding='same', activation='relu')(conv_image)
        conv_image = Conv2D(384, (3, 3), strides=(1, 1), padding='same', activation='relu')(conv_image)
        conv_image = Conv2D(384, (3, 3), strides=(1, 1), padding='same', activation='relu')(conv_image)
        conv_image = MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(conv_image)

        conv_image = Flatten()(conv_image)

        # color branch
        out_color = Dense(4096, activation='relu')(conv_image)
        out_color = Dense(512, activation='relu')(out_color)
        out_color = Dense(3, activation='sigmoid', name='out_color')(out_color)

        # clothing-type branch
        out_type = Dense(4096, activation='relu')(conv_image)
        out_type = Dense(512, activation='relu')(out_type)
        out_type = Dense(3, activation='sigmoid', name='out_type')(out_type)

        model = Model(inputs=image_input, outputs=[out_color, out_type])
        return model
Then compile the model:

from keras.optimizers import Adadelta

opt = Adadelta()
print('\n--- optimizer: %s ---\n' % (opt.__class__.__name__))
model.compile(optimizer=opt,
              loss={'out_color': 'categorical_crossentropy',
                    'out_type': 'categorical_crossentropy'},
              loss_weights={'out_color': out_color_weight,
                            'out_type': out_type_weight},
              metrics=['accuracy'])
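out_color_weight and out_type_weight set the relative contribution of each head to the total loss; their values are not given in the original snippet, so here is a purely illustrative choice that weights both heads equally:

# hypothetical values: weight both losses equally in the total loss
out_color_weight = 1.0
out_type_weight = 1.0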
print("[INFO] loading images...")imagePaths = sorted(list(paths.list_images(args["dataset"])))random.seed(42)random.shuffle(imagePaths)
data = []labels = []
for imagePath in imagePaths: image = cv2.imread(imagePath) image = cv2.resize(image, (IMAGE_DIMS[1], IMAGE_DIMS[0])) image = img_to_array(image) data.append(image)
l = label = imagePath.split(os.path.sep)[-2].split("_") labels.append(l)
data = np.array(data, dtype="float") / 255.0labels = np.array(labels)print("[INFO] data matrix: {} images ({:.2f}MB)".format( len(imagePaths), data.nbytes / (1024 * 1000.0)))print(labels)print("[INFO] class labels:")mlb = MultiLabelBinarizer()labels = mlb.fit_transform(labels)print(labels)for (i, label) in enumerate(mlb.classes_): print("{}. {}".format(i + 1, label))
(trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.2, random_state=42)
aug = ImageDataGenerator(rotation_range=25, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode="nearest")
print("[INFO] training network...")H = model.fit_generator( aug.flow(trainX, trainY, batch_size=BS), validation_data=(testX, testY), steps_per_epoch=len(trainX) // BS, epochs=EPOCHS, verbose=1)
Using the Lambda layer to make your Keras network more flexible

Keras's Lambda layer is something of a secret weapon for customizing Keras. Let's take a look.
How to import Keras's Lambda layer, and its signature:
from keras.layers.core import Lambda

keras.layers.core.Lambda(function, output_shape=None, mask=None, arguments=None)

The parameters mean:

- function: the function to apply; it takes a single argument, the output of the previous layer
- output_shape: the shape of the value the function returns; it can be a tuple, or a function that computes the output shape from the input shape
- mask: the mask
- arguments: optional dictionary of additional keyword arguments to pass to the function

Example:
model.add(Lambda(lambda x: x ** 2))
from keras import backend as K

def antirectifier(x):
    x -= K.mean(x, axis=1, keepdims=True)
    x = K.l2_normalize(x, axis=1)
    pos = K.relu(x)
    neg = K.relu(-x)
    return K.concatenate([pos, neg], axis=1)

def antirectifier_output_shape(input_shape):
    shape = list(input_shape)
    assert len(shape) == 2  # only valid for 2D tensors
    shape[-1] *= 2
    return tuple(shape)

model.add(Lambda(antirectifier, output_shape=antirectifier_output_shape))
Here is a fuller example: Lambda with the arguments parameter is used to slice the input, and each slice is then fed through an Embedding and a global average pooling layer.

import numpy as np
import matplotlib.pyplot as plt
from keras.models import Model
from keras.layers import *
from keras.utils import plot_model

def get_slice(x, index):
    return x[:, index]

keep_num = 3
field_lens = 90

input_field = Input(shape=(keep_num, field_lens))
avg_pools = []
for n in range(keep_num):
    # pass the slice index to get_slice via the arguments dict
    block = Lambda(get_slice, output_shape=(1, field_lens), arguments={'index': n})(input_field)
    x_emb = Embedding(input_dim=100, output_dim=200, input_length=field_lens)(block)
    x_avg = GlobalAveragePooling1D()(x_emb)
    avg_pools.append(x_avg)
output = concatenate([p for p in avg_pools])
model = Model(input_field, output)
plot_model(model, to_file='model/lambda.png', show_shapes=True)

# display the saved architecture diagram
plt.figure(figsize=(21, 12))
im = plt.imread('model/lambda.png')
plt.imshow(im)
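To sanity-check the graph (a small sketch of my own, using random integer data as stand-in feature indices), push a dummy batch through and confirm that the concatenated output has keep_num * 200 features:

# dummy batch of 2 samples with integer indices valid for the Embedding (input_dim=100)
dummy = np.random.randint(0, 100, size=(2, keep_num, field_lens))
print(model.predict(dummy).shape)  # (2, 600): 3 slices x 200 embedding dims each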
If a single function is not enough and you need trainable weights, you can go one step further and write a custom layer; this is the example from the official Keras documentation on writing your own layers:

from keras import backend as K
from keras.engine.topology import Layer
import numpy as np

class MyLayer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # create a trainable weight for this layer
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)
        super(MyLayer, self).build(input_shape)

    def call(self, x):
        return K.dot(x, self.kernel)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)

To summarize: Keras's Lambda layer is just a layer that lets you apply a custom operation to the output of the previous layer, and that custom operation is supplied through the function argument of keras.layers.core.Lambda.
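A quick usage sketch (my addition, not part of the docs excerpt above): the custom layer drops into a model like any built-in layer.

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, input_shape=(32,)))
model.add(MyLayer(16))  # the custom layer defined above
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()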
Using callbacks

The model's fit() method has a callbacks parameter that accepts a list of callback objects. During training, every callback in the list gets invoked each epoch, which lets us pull off all sorts of tricks while training is still running.
In the example below, setting monitor='val_acc' saves the model with the highest validation accuracy seen during training.
from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint(filepath='./best_model.weights', monitor='val_acc', verbose=1, save_best_only=True)
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test), callbacks=[checkpoint])
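To pick up the best model again later (my own small addition; it assumes the same architecture has been built and compiled first), simply restore the saved weights:

# rebuild/compile the same architecture first, then load the best checkpoint
model.load_weights('./best_model.weights')
# assumes the model was compiled with metrics=['accuracy']
loss, acc = model.evaluate(x_test, y_test, verbose=0)
print('restored model accuracy: %.4f' % acc)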
Deep learning models can take a very long time to train, and losing a run to an interruption really hurts.
So how do we save the model we want partway through training?
# encode the hyper-parameters into the checkpoint file name
ArgName = ',epo=' + str(epoch) + ',bsize=' + str(batch_size) + ',lr=' + str(LearningRate) + ',DropRate=' + str(DropoutRate)
FileNamePy = os.path.basename(__file__).split('.')[-2]
checkpoint_filepath = FileNamePy + ArgName

checkpointer_val_best = ModelCheckpoint(filepath=checkpoint_filepath, monitor='val_acc', verbose=1,
                                        save_best_only=True, mode='max', save_weights_only=True)

callbacks_list = [checkpointer_val_best]

hist = model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples,
    epochs=epoch,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples,
    callbacks=callbacks_list)

This approach is simple, but it has one obvious drawback: the monitored quantity is limited to the metrics passed to compile, and a custom metric in Keras has to be written as a tensor operation. If the metric you actually care about cannot be expressed as a tensor operation (BLEU, for instance), you cannot write it as a metric function, and this approach no longer works.
The observation above comes from Su Jianlin (蘇劍林), who also offers a solution: write the callback yourself.
from keras.callbacks import Callback

def evaluate():
    # any metric can be computed here with ordinary numpy code
    pred = model.predict(x_test)
    return np.mean(pred.argmax(axis=1) == y_test)

class Evaluate(Callback):

    def __init__(self):
        self.accs = []
        self.highest = 0.

    def on_epoch_end(self, epoch, logs=None):
        acc = evaluate()
        self.accs.append(acc)
        if acc >= self.highest:
            # save the weights whenever the metric reaches a new high
            self.highest = acc
            model.save_weights('best_model.weights')

        print('acc: %s, highest: %s' % (acc, self.highest))

evaluator = Evaluate()
model.fit(x_train, y_train, epochs=10, callbacks=[evaluator])
You may also want to tune hyper-parameters during training. The most common need is to adjust the learning rate by epoch, which is easily done with LearningRateScheduler, another built-in callback. This scheme also comes from Su Jianlin.
from keras.callbacks import LearningRateScheduler

def lr_schedule(epoch):
    # piecewise-constant learning-rate schedule
    if epoch < 50:
        lr = 1e-2
    elif epoch < 80:
        lr = 1e-3
    else:
        lr = 1e-4
    return lr

lr_scheduler = LearningRateScheduler(lr_schedule)

model.fit(x_train, y_train, epochs=10, callbacks=[evaluator, lr_scheduler])

More examples can be found on Su Jianlin's blog: https://www.spaces.ac.cn/
Printing the optimizer's name

Here is a small trick that some beginners may not have come across. Often we import an optimizer but need its name as a string for logging or record-keeping. How do we get it?
from keras.optimizers import Adam

opt = Adam()
opt_name = opt.__class__.__name__
print(opt_name)  # prints "Adam"

Closing thoughts

Keras is arguably the easiest deep learning framework to get started with. People unfamiliar with it sometimes criticize it as inflexible, but it is in fact very flexible; in Su Jianlin's words: whatever TensorFlow can do, Keras can do too.
This article is titled Keras使用進階(Ⅰ) (Advanced Keras Usage, Part I), so by rights there should be a Part II; as for when that will appear... hehe~
References & further reading

[Learning resources recommended by Keras author fchollet](https://github.com/fchollet/keras-resources)
[Multi_Label_Classification_Keras](https://github.com/ItchyHiker/Multi_Label_Classification_Keras)
[Official Keras documentation on writing your own layers](https://keras.io/layers/writing-your-own-keras-layers/)
[讓Keras更酷一些!Keras模型雜談 - 科學空間|Scientific Spaces](https://www.spaces.ac.cn/archives/5765)
[UCF course: Advanced Computer Vision (Keras) by Mubarak Shah](https://www.bilibili.com/video/av24577241)
[A blogger's worked example of the Lambda layer](https://www.cnblogs.com/jins-note/p/9734771.html)
[Official Keras documentation on callbacks](https://keras.io/zh/callbacks/)
Get in touch
github: https://github.com/keloli
blog: https://www.jianshu.com/u/d055ee434e59