

Implementing CNN/DenseNet Classification with Keras

Published: 2023/11/27

1. Define the basic network parameters

First, define the network input:

input = Input(shape=(240, 640, 3))

Gradient-descent optimizer for backpropagation:

SGD converges reliably, but slowly.
Adam is fast, but may fail to converge.
[link](https://blog.csdn.net/wydbyxr/article/details/84822806)
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
adam = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
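As a rough illustration of what the SGD settings mean, one momentum update step can be traced by hand (a sketch with made-up numbers; `decay` is ignored, and real Keras internals may differ in detail):

```python
lr, momentum = 0.01, 0.9          # same settings as the SGD optimizer above
velocity, weight, grad = 0.0, 1.0, 0.5  # hypothetical starting state and gradient

velocity = momentum * velocity - lr * grad            # v <- m*v - lr*g
weight_nesterov = weight + momentum * velocity - lr * grad  # Nesterov look-ahead step

print(velocity, weight_nesterov)
```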

Learning-rate schedule

lrate = LearningRateScheduler(step_decay)

where step_decay implements the learning-rate decay schedule:

def step_decay(epoch):
    init_lrate = 0.001
    drop = 0.5
    epochs_drop = 10
    # note: floor must wrap the whole quotient for a true step decay;
    # floor(1 + epoch) / epochs_drop would decay smoothly on every epoch
    lrate = init_lrate * pow(drop, np.floor((1 + epoch) / epochs_drop))
    print('learning rate: ', lrate)
    return lrate
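With these constants the schedule halves the learning rate every 10 epochs. A quick standalone check of the same formula (plain Python with `math.floor`, no Keras or NumPy needed):

```python
import math

def step_decay_check(epoch, init_lrate=0.001, drop=0.5, epochs_drop=10):
    # halve the rate once per epochs_drop epochs (floor wraps the whole quotient)
    return init_lrate * pow(drop, math.floor((1 + epoch) / epochs_drop))

print(step_decay_check(0))    # epochs 0-8 keep the initial rate
print(step_decay_check(9))    # first halving at epoch 9
print(step_decay_check(19))   # halved again ten epochs later
```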

Set the batch size

batch_size = 32  # number of samples fed to the network per step

Read the training and validation sets

train_lines = []
val_lines = []
with open('data.txt') as file:
    lines = file.readlines()
    for line in lines:
        line = line.strip()
        line = line.split(' ')
        train_lines.append([line[0], line[1]])
with open('val.txt') as file:
    lines = file.readlines()
    for line in lines:
        line = line.strip()
        line = line.split(' ')
        val_lines.append([line[0], line[1]])

# The txt files look like this; the number is the class label:
# moredata/1/001.png 1
# moredata/1/002.png 1
# moredata/1/003.png 2
# moredata/1/004.png 3
# moredata/1/005.png 4
# moredata/1/006.png 1
# moredata/1/007.png 1
# moredata/1/008.png 1
# moredata/1/009.png 1
# ...
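Since each annotation line is just a path and a label separated by a space, the parsing above can be sanity-checked in isolation (a minimal sketch; the sample paths are the illustrative entries from the txt listing):

```python
def parse_lines(lines):
    # mirror the parsing above: strip the newline, split on a space,
    # keep the image path and its label as a pair
    pairs = []
    for line in lines:
        parts = line.strip().split(' ')
        pairs.append([parts[0], parts[1]])
    return pairs

sample = ["moredata/1/001.png 1\n", "moredata/1/004.png 3\n"]
print(parse_lines(sample))
```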

2. Define the network architecture

CNN

def cnn(input):
    # three convolutional blocks
    conv1 = Conv2D(16, kernel_size=(3, 3), padding='same', activation='relu')(input)
    # conv1 = BatchNormalization()(conv1)  # BN layer: stabilizes activations, helps against overfitting
    conv1 = Conv2D(16, kernel_size=(3, 3), padding='same', activation='relu')(conv1)
    pool1 = MaxPooling2D((2, 2))(conv1)
    conv2 = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(pool1)
    # conv2 = BatchNormalization()(conv2)
    conv2 = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(conv2)
    pool2 = MaxPooling2D((2, 2))(conv2)
    conv3 = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(pool2)
    # conv3 = BatchNormalization()(conv3)
    conv3 = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(conv3)
    pool3 = MaxPooling2D((2, 2))(conv3)
    flatten = Flatten()(pool3)                          # flatten to a 1-D vector
    dense1 = Dense(1024, activation='relu')(flatten)    # fully connected layer
    dense2 = Dense(3, activation='softmax')(dense1)
    model = Model(inputs=input, outputs=dense2)
    return model
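Each MaxPooling2D halves the spatial dimensions, so the size of the tensor entering Flatten can be worked out by hand for the 240×640 input (a quick arithmetic check, not Keras output):

```python
h, w = 240, 640   # input height and width
channels = 32     # filters in the last conv block
for _ in range(3):          # three 2x2 max-pooling layers
    h, w = h // 2, w // 2
flat = h * w * channels     # length of the flattened vector feeding Dense(1024)
print(h, w, flat)
```

That 76,800-element vector into Dense(1024) is where the bulk of this model's parameters live.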

Densenet

def dense(input):
    conv1a = Conv2D(16, kernel_size=(3, 3), padding='same', activation='relu')(input)
    # conv1a = BatchNormalization()(conv1a)
    conv1b = Conv2D(16, kernel_size=(3, 3), padding='same', activation='relu')(conv1a)
    # conv1b = BatchNormalization()(conv1b)
    merge1 = concatenate([conv1a, conv1b], axis=-1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(merge1)
    conv2a = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(pool1)
    # conv2a = BatchNormalization()(conv2a)
    conv2b = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(conv2a)
    # conv2b = BatchNormalization()(conv2b)
    merge2 = concatenate([conv2a, conv2b], axis=-1)
    pool2 = MaxPooling2D(pool_size=(2, 2))(merge2)
    conv3a = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu')(pool2)
    # conv3a = BatchNormalization()(conv3a)
    conv3b = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu')(conv3a)
    # conv3b = BatchNormalization()(conv3b)
    merge3 = concatenate([conv3a, conv3b], axis=-1)
    pool3 = MaxPooling2D(pool_size=(2, 2))(merge3)
    conv4a = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu')(pool3)
    # conv4a = BatchNormalization()(conv4a)
    conv4b = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu')(conv4a)
    # conv4b = BatchNormalization()(conv4b)
    merge4 = concatenate([conv4a, conv4b], axis=-1)
    pool4 = MaxPooling2D(pool_size=(2, 2))(merge4)
    flatten = Flatten()(pool4)
    dense1 = Dense(128, activation='sigmoid')(flatten)
    dense2 = Dropout(0.25)(dense1)
    output = Dense(3, activation='softmax')(dense2)
    model = Model(inputs=input, outputs=output)
    return model
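The concatenate calls are what make this network "dense": each merge stacks the two conv outputs along the channel axis, so the channel count entering each pooling layer is double the block's filter count. Tracking the channels by hand:

```python
block_filters = [16, 32, 64, 64]   # filters per conv layer in each block, as defined above
# concatenate([conv_a, conv_b], axis=-1) stacks channels, doubling the count
merged_channels = [2 * f for f in block_filters]
print(merged_channels)
```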

3. Start training

With the input parameters defined and the network architecture built, training can begin.

Instantiate the network

model = cnn(input)

Track the model during training, keep only the best weights, and save them to an .h5 file

ck_name = 'training/cnn' + '.h5'
checkpoint = ModelCheckpoint(ck_name,
                             monitor='val_acc',    # val_acc, val_loss, acc, or loss
                             verbose=1,            # 0 or 1; 1 prints a message whenever the model is saved (default 0)
                             save_best_only=True,  # keep only the best model
                             mode='auto')          # 'min', 'max', or 'auto'

Save the network architecture to a JSON file

model_json = model.to_json()
with open(os.path.join('training/cnn.json'), 'w') as json_file:
    json_file.write(model_json)

Compile and train the network

model.compile(loss='categorical_crossentropy',  # cross-entropy loss for multi-class classification
              optimizer=sgd,                    # the SGD optimizer defined above
              metrics=['accuracy'])
history = model.fit_generator(generator=data_generator(train_lines, batch_size),
                              steps_per_epoch=len(train_lines) // batch_size,
                              epochs=100,
                              verbose=2,  # log verbosity: https://blog.csdn.net/Hodors/article/details/97500808
                              callbacks=[lrate, checkpoint],
                              validation_data=data_generator(val_lines, batch_size),
                              validation_steps=len(val_lines) // batch_size)

fit_generator trains the model batch by batch on data produced by a Python generator (or a Sequence instance).
Here data_generator is hand-written; it handles loading and preprocessing the input images.
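Note that steps_per_epoch uses integer division, so any remainder smaller than one batch is simply dropped each epoch. A quick check with a hypothetical dataset size:

```python
batch_size = 32
n_train = 1000                      # hypothetical number of training lines
steps = n_train // batch_size       # full batches per epoch
leftover = n_train - steps * batch_size  # samples skipped each epoch
print(steps, leftover)
```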

The txt labels run from 1 to 5. Labels 1-3 are merged into a single class, while 4 and 5 each keep their own class, giving three classes encoded as one-hot vectors. Each image is converted to grayscale, and three consecutive frames are stacked as the channels of one input image; this is loosely similar to the LSTM idea of giving the input a temporal dimension.
def data_generator(data_list, batch_size=32):
    # labels 1-3 collapse into class 0; 4 and 5 map to classes 1 and 2
    get_one_hot = {'1': [1, 0, 0], '2': [1, 0, 0], '3': [1, 0, 0],
                   '4': [0, 1, 0], '5': [0, 0, 1]}
    while True:
        for i in range(0, len(data_list) - 3, batch_size):
            labels = []
            images = []
            data_list_batch = data_list[i:i + batch_size + 3]
            for j in range(len(data_list_batch) - 3):
                images_3 = []
                for k in range(3):  # three consecutive frames
                    img = cv2.imread(data_list_batch[j + k][0])
                    img = img[110:590, :, :]           # crop
                    img = cv2.resize(img, (240, 640))  # note: cv2.resize expects (width, height)
                    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                    img = (img - img.min()) / (img.max() - img.min())  # min-max normalize to [0, 1]
                    images_3.append(img)
                # stack the three grayscale frames along the channel axis;
                # np.stack keeps each frame intact as one channel, whereas
                # reshaping a (3, H, W) array to (H, W, 3) would scramble the pixels
                images.append(np.stack(images_3, axis=-1))
                labels.append(get_one_hot[data_list_batch[j + 2][1]])  # label of the last frame
            images = np.asarray(images).reshape([len(data_list_batch) - 3, 240, 640, 3])
            labels = np.asarray(labels).reshape([len(data_list_batch) - 3, 3])
            yield images, labels
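The label grouping used by the generator can be checked in isolation; the index of the 1 in each one-hot vector recovers the merged class id:

```python
get_one_hot = {'1': [1, 0, 0], '2': [1, 0, 0], '3': [1, 0, 0],
               '4': [0, 1, 0], '5': [0, 0, 1]}

def merged_class(raw_label):
    # position of the 1 in the one-hot vector = merged class index
    return get_one_hot[raw_label].index(1)

print([merged_class(c) for c in '12345'])
```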

That completes training.

4. Visualize the training results

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)

# plot the curves
plt.figure()
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.savefig('training/accuracy_cnn' + '.png')

plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.savefig('training/loss_cnn' + '.png')

Code

from keras import backend as K
from keras.models import Model, Sequential
from keras.layers import Activation
from keras.layers import AveragePooling2D
from keras.layers import BatchNormalization
from keras.layers import Concatenate
from keras.layers import Conv2D, Dense, Dropout
from keras.layers import Dense, Add
from keras.layers import concatenate, LSTM
from keras.layers import GlobalAveragePooling2D
from keras.layers import GlobalMaxPooling2D
from keras.layers import Input, Flatten
from keras.layers import MaxPooling2D
import os
import matplotlib.pyplot as plt
from keras import regularizers, optimizers
import numpy as np
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
import cv2


def cnn(input):
    conv1 = Conv2D(16, kernel_size=(3, 3), padding='same', activation='relu')(input)
    # conv1 = BatchNormalization()(conv1)
    conv1 = Conv2D(16, kernel_size=(3, 3), padding='same', activation='relu')(conv1)
    pool1 = MaxPooling2D((2, 2))(conv1)
    conv2 = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(pool1)
    # conv2 = BatchNormalization()(conv2)
    conv2 = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(conv2)
    pool2 = MaxPooling2D((2, 2))(conv2)
    conv3 = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(pool2)
    # conv3 = BatchNormalization()(conv3)
    conv3 = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(conv3)
    pool3 = MaxPooling2D((2, 2))(conv3)
    flatten = Flatten()(pool3)
    dense1 = Dense(1024, activation='relu')(flatten)
    dense2 = Dense(3, activation='softmax')(dense1)
    model = Model(inputs=input, outputs=dense2)
    return model


def dense(input):
    # input = Input(shape=(128, 128, 3))
    conv1a = Conv2D(16, kernel_size=(3, 3), padding='same', activation='relu')(input)
    # conv1a = BatchNormalization()(conv1a)
    conv1b = Conv2D(16, kernel_size=(3, 3), padding='same', activation='relu')(conv1a)
    # conv1b = BatchNormalization()(conv1b)
    merge1 = concatenate([conv1a, conv1b], axis=-1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(merge1)
    conv2a = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(pool1)
    # conv2a = BatchNormalization()(conv2a)
    conv2b = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')(conv2a)
    # conv2b = BatchNormalization()(conv2b)
    merge2 = concatenate([conv2a, conv2b], axis=-1)
    pool2 = MaxPooling2D(pool_size=(2, 2))(merge2)
    conv3a = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu')(pool2)
    # conv3a = BatchNormalization()(conv3a)
    conv3b = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu')(conv3a)
    # conv3b = BatchNormalization()(conv3b)
    merge3 = concatenate([conv3a, conv3b], axis=-1)
    pool3 = MaxPooling2D(pool_size=(2, 2))(merge3)
    conv4a = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu')(pool3)
    # conv4a = BatchNormalization()(conv4a)
    conv4b = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu')(conv4a)
    # conv4b = BatchNormalization()(conv4b)
    merge4 = concatenate([conv4a, conv4b], axis=-1)
    pool4 = MaxPooling2D(pool_size=(2, 2))(merge4)
    flatten = Flatten()(pool4)
    dense1 = Dense(128, activation='sigmoid')(flatten)
    dense2 = Dropout(0.25)(dense1)
    output = Dense(3, activation='softmax')(dense2)
    model = Model(inputs=input, outputs=output)
    return model


def data_generator(data_list, batch_size=32):
    get_one_hot = {'1': [1, 0, 0], '2': [1, 0, 0], '3': [1, 0, 0],
                   '4': [0, 1, 0], '5': [0, 0, 1]}
    num_classes = 3
    mini_img = 3  # number of consecutive frames stacked per sample
    while True:
        for i in range(0, len(data_list) - mini_img, batch_size):
            labels = []
            images = []
            data_list_batch = data_list[i:i + batch_size + mini_img]
            for j in range(len(data_list_batch) - mini_img):
                images_3 = []
                for k in range(mini_img):
                    img = cv2.imread(data_list_batch[j + k][0])
                    img = img[110:590, :, :]
                    img = cv2.resize(img, (240, 640))  # cv2.resize expects (width, height)
                    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                    img = (img - img.min()) / (img.max() - img.min())
                    images_3.append(img)
                # stack the grayscale frames along the channel axis; np.stack keeps
                # each frame intact as one channel, unlike reshaping a (3, H, W) array
                images.append(np.stack(images_3, axis=-1))
                labels.append(get_one_hot[data_list_batch[j + mini_img - 1][1]])
            images = np.asarray(images).reshape([len(data_list_batch) - mini_img, 240, 640, 3])
            labels = np.asarray(labels).reshape([len(data_list_batch) - mini_img, num_classes])
            yield images, labels


def step_decay(epoch):
    init_lrate = 0.001
    drop = 0.5
    epochs_drop = 10
    # floor wraps the whole quotient so the rate halves every epochs_drop epochs
    lrate = init_lrate * pow(drop, np.floor((1 + epoch) / epochs_drop))
    print('learning rate: ', lrate)
    return lrate


input = Input(shape=(240, 640, 3))
ck_name = 'training/cnn' + '.h5'
checkpoint = ModelCheckpoint(ck_name,
                             monitor='val_acc',
                             verbose=1,
                             save_best_only=True,
                             mode='auto')
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
adam = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
lrate = LearningRateScheduler(step_decay)
batch_size = 32
train_lines = []
val_lines = []
with open('data.txt') as file:
    lines = file.readlines()
    for line in lines:
        line = line.strip()
        line = line.split(' ')
        train_lines.append([line[0], line[1]])
with open('val.txt') as file:
    lines = file.readlines()
    for line in lines:
        line = line.strip()
        line = line.split(' ')
        val_lines.append([line[0], line[1]])
# print(train_lines[0])
# # model = dense(input)
# model = resnet(input)
model = cnn(input)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
history = model.fit_generator(generator=data_generator(train_lines, batch_size),
                              steps_per_epoch=len(train_lines) // batch_size,
                              epochs=100,
                              verbose=2,
                              callbacks=[lrate, checkpoint],
                              validation_data=data_generator(val_lines, batch_size),
                              validation_steps=len(val_lines) // batch_size)
# for layer in model.layers:
#     layer.trainable = Truemodel_json = model.to_json()
with open(os.path.join('training/cnn.json'), 'w') as json_file:json_file.write(model_json)acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)

plt.figure()
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.savefig('training/accuracy_cnn' + '.png')

plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.savefig('training/loss_cnn' + '.png')
