CV: Training an emotion classification model (.hdf5) with the mini_XCEPTION CNN architecture in Keras and saving it to a specified folder
Contents

Illustrated process
Core code
Illustrated process

The overall pipeline: load the fer2013 facial-expression dataset, augment it on the fly with ImageDataGenerator, train the mini_XCEPTION network, and checkpoint the best-performing .hdf5 model to the target folder.
Core code
First, the model definition (imported below as models.cnn.mini_XCEPTION): a small, fully convolutional Xception-style network built from a plain convolutional base followed by four residual modules of depthwise-separable convolutions.

from keras.layers import Activation, Conv2D, Input, MaxPooling2D
from keras.layers import SeparableConv2D, BatchNormalization, GlobalAveragePooling2D
from keras.models import Model
from keras.regularizers import l2
from keras import layers

def mini_XCEPTION(input_shape, num_classes, l2_regularization=0.01):
    regularization = l2(l2_regularization)

    # base: two plain 3x3 convolutions
    img_input = Input(input_shape)
    x = Conv2D(8, (3, 3), strides=(1, 1), kernel_regularizer=regularization,
               use_bias=False)(img_input)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(8, (3, 3), strides=(1, 1), kernel_regularizer=regularization,
               use_bias=False)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)

    # module 1: residual shortcut + two separable convolutions, downsampled by max pooling
    residual = Conv2D(16, (1, 1), strides=(2, 2),
                      padding='same', use_bias=False)(x)
    residual = BatchNormalization()(residual)
    x = SeparableConv2D(16, (3, 3), padding='same',
                        kernel_regularizer=regularization,
                        use_bias=False)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = SeparableConv2D(16, (3, 3), padding='same',
                        kernel_regularizer=regularization,
                        use_bias=False)(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
    x = layers.add([x, residual])

    # module 2: same pattern with 32 filters
    residual = Conv2D(32, (1, 1), strides=(2, 2),
                      padding='same', use_bias=False)(x)
    residual = BatchNormalization()(residual)
    x = SeparableConv2D(32, (3, 3), padding='same',
                        kernel_regularizer=regularization,
                        use_bias=False)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = SeparableConv2D(32, (3, 3), padding='same',
                        kernel_regularizer=regularization,
                        use_bias=False)(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
    x = layers.add([x, residual])

    # module 3: 64 filters
    residual = Conv2D(64, (1, 1), strides=(2, 2),
                      padding='same', use_bias=False)(x)
    residual = BatchNormalization()(residual)
    x = SeparableConv2D(64, (3, 3), padding='same',
                        kernel_regularizer=regularization,
                        use_bias=False)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = SeparableConv2D(64, (3, 3), padding='same',
                        kernel_regularizer=regularization,
                        use_bias=False)(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
    x = layers.add([x, residual])

    # module 4: 128 filters
    residual = Conv2D(128, (1, 1), strides=(2, 2),
                      padding='same', use_bias=False)(x)
    residual = BatchNormalization()(residual)
    x = SeparableConv2D(128, (3, 3), padding='same',
                        kernel_regularizer=regularization,
                        use_bias=False)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = SeparableConv2D(128, (3, 3), padding='same',
                        kernel_regularizer=regularization,
                        use_bias=False)(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)
    x = layers.add([x, residual])

    # classifier head: one feature map per class, pooled globally, then softmax
    x = Conv2D(num_classes, (3, 3),
               # kernel_regularizer=regularization,
               padding='same')(x)
    x = GlobalAveragePooling2D()(x)
    output = Activation('softmax', name='predictions')(x)

    model = Model(img_input, output)
    return model
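As a quick sanity check, the network can be instantiated directly. A minimal sketch, assuming only the function defined above, that builds the graph for 64x64 grayscale faces and 7 emotion classes:

import numpy as np

# build the model with the same shape/class settings used by the training script below
model = mini_XCEPTION((64, 64, 1), 7)
model.summary()  # layer-by-layer output shapes and parameter counts

# push a dummy grayscale face through the untrained network
dummy = np.random.rand(1, 64, 64, 1).astype('float32')
print(model.predict(dummy).shape)  # (1, 7): one softmax probability per class

The architecture borrows two ideas from Xception: depthwise-separable convolutions, which split a standard convolution into a cheap per-channel filter plus a 1x1 projection, and residual shortcuts, which keep gradients flowing through the four downsampling modules. Together they keep the parameter count small enough for real-time emotion recognition.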
Next, the training script, which wires together data augmentation, callbacks, dataset loading, and the fit loop:

# CV: train an emotion classification model (.hdf5) with the Xception-style CNN
# and save it to a specified folder
from keras.callbacks import CSVLogger, ModelCheckpoint, EarlyStopping
from keras.callbacks import ReduceLROnPlateau
from keras.preprocessing.image import ImageDataGenerator

from models.cnn import mini_XCEPTION
# DataManager, split_data and preprocess_input are the project's own helpers;
# adjust these imports to match your repository layout.
from utils.datasets import DataManager, split_data
from utils.preprocessor import preprocess_input

# parameters: batch size, epochs, input shape, validation split, verbosity,
# class count, patience, and the folder where logs and .hdf5 models are saved
batch_size = 32            # samples per batch; each batch drives one gradient-descent step
num_epochs = 10000         # epoch at which training terminates; with no initial_epoch set,
                           # this is the total number of training epochs
input_shape = (64, 64, 1)
validation_split = .2      # float in (0, 1): fraction of the training data held out for
                           # validation; evaluated (loss, accuracy, ...) after every epoch
verbose = 1                # logging: 0 = silent, 1 = progress bar, 2 = one line per epoch
num_classes = 7
patience = 50              # epochs without improvement in the monitored metric before training stops
base_path = '../trained_models/emotion_models/'

# data generator: ImageDataGenerator yields mini-batches with real-time augmentation
data_generator = ImageDataGenerator(
    featurewise_center=False,
    featurewise_std_normalization=False,
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=.1,
    horizontal_flip=True)

# build mini_XCEPTION from the input shape and class count, compile it
# (optimizer, loss, metrics), then print the network summary
model = mini_XCEPTION(input_shape, num_classes)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()

# dataset to train on: fer2013, a 7-class facial-expression dataset
datasets = ['fer2013']
for dataset_name in datasets:
    print('Training dataset:', dataset_name)

    # callbacks: CSVLogger, EarlyStopping, ReduceLROnPlateau and ModelCheckpoint, collected in a list
    log_file_path = base_path + dataset_name + '_emotion_training.log'
    csv_logger = CSVLogger(log_file_path, append=False)        # streams epoch results to a CSV file
    early_stop = EarlyStopping('val_loss', patience=patience)  # stops training when val_loss stops improving
    reduce_lr = ReduceLROnPlateau('val_loss', factor=0.1,      # reduces the learning rate when val_loss plateaus
                                  patience=int(patience/4), verbose=1)
    trained_models_path = base_path + dataset_name + '_mini_XCEPTION'
    model_names = trained_models_path + '.{epoch:02d}-{val_acc:.2f}.hdf5'
    model_checkpoint = ModelCheckpoint(model_names, 'val_loss', verbose=1,  # saves the best model each epoch
                                       save_best_only=True)
    callbacks = [model_checkpoint, csv_logger, early_stop, reduce_lr]

    # loading dataset
    data_loader = DataManager(dataset_name, image_size=input_shape[:2])  # custom loader selected by dataset name
    faces, emotions = data_loader.get_data()   # custom helper returning the ground-truth data for this dataset
    faces = preprocess_input(faces)            # custom helper: cast to float32, then divide by 255.0
    num_samples, num_classes = emotions.shape  # read the label matrix dimensions
    train_data, val_data = split_data(faces, emotions, validation_split)  # custom train/validation split
    train_faces, train_emotions = train_data

    # training: fit_generator fits the model on batches produced by the augmenting generator
    model.fit_generator(data_generator.flow(train_faces, train_emotions,  # flow returns a Numpy array iterator
                                            batch_size),
                        steps_per_epoch=len(train_faces) / batch_size,
                        epochs=num_epochs, verbose=1, callbacks=callbacks,
                        validation_data=val_data)
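When training ends, the best checkpoint sits under base_path with a name following the ModelCheckpoint pattern above. A minimal inference sketch, assuming a hypothetical checkpoint filename (the epoch/val_acc suffix will differ on every run):

import numpy as np
from keras.models import load_model

# hypothetical filename: the '.102-0.66' suffix comes from the epoch and val_acc of your best run
model = load_model('../trained_models/emotion_models/fer2013_mini_XCEPTION.102-0.66.hdf5')

# a face must be preprocessed the same way as the training data: 64x64, grayscale, float32 in [0, 1]
face = np.random.rand(1, 64, 64, 1).astype('float32')
probs = model.predict(face)            # shape (1, 7): one probability per emotion class
print(int(probs.argmax(axis=-1)[0]))   # index of the predicted emotion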
Summary

The recipe in short: a compact, fully convolutional mini_XCEPTION network, fer2013 training data with on-the-fly augmentation, and a callback stack (CSV logging, early stopping, learning-rate reduction, checkpointing) that writes the best .hdf5 model to the specified folder for later use.