

Deep Learning: Gender Classification with a Convolutional Neural Network (VGG16)

Published: 2023/12/15

I stumbled across a dataset on kaggle aimed at improving how accurately a network can classify faces as male or female. Using transfer learning, I experimented with several convolutional neural networks; the best model reached roughly 95% accuracy. The data was split into training, test, and validation sets, and the final accuracy on the validation set was 93.63%.

1. Importing libraries

```python
import tensorflow as tf
import matplotlib.pyplot as plt
import os, PIL, pathlib
import pandas as pd
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, models, Input
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, Dense, Flatten, Dropout, BatchNormalization, Activation
from tensorflow.keras.layers import MaxPooling2D, AveragePooling2D, Concatenate, Lambda, GlobalAveragePooling2D
from tensorflow.keras import backend as K

# Chinese font support for matplotlib
plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # display minus signs correctly
```

2. Importing the data

The original dataset downloaded from kaggle contains 20,000+ images. Due to hardware constraints, I selected 4,286 of them for the training and test sets and kept the rest as the validation set.

```python
data_dir = "E:/tmp/.keras/datasets/Man_Women/faces_test"
data_dir = pathlib.Path(data_dir)
img_count = len(list(data_dir.glob('*/*')))
print(img_count)

all_images_paths = list(data_dir.glob('*'))
all_images_paths = [str(path) for path in all_images_paths]
# Take the class name from each class directory; indexing a backslash
# split is brittle, so use os.path.basename instead:
all_label_names = [os.path.basename(path).split(".")[0] for path in all_images_paths]
print(all_label_names)
```

Output:

```
4286
['man', 'woman']
```

Parameter settings:

```python
height = 224
width = 224
epochs = 15
batch_size = 32
```

3. Training and test sets

The data is split into training and test sets at an 8:2 ratio.
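As a quick sanity check on the split sizes: with `validation_split=0.2`, Keras holds out roughly 20% of the files. Note that the split is applied per class directory, so the actual counts reported later (3430 train / 856 test) can differ by an image or two from a naive global split:

```python
total = 4286          # images in the combined train/test folder
val_fraction = 0.2

# Approximate global 8:2 split; Keras splits per class directory,
# so the real counts (3430 / 856 here) may differ slightly.
approx_val = int(total * val_fraction)   # 857
approx_train = total - approx_val        # 3429

print(approx_train, approx_val)
```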

```python
train_data_gen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,        # normalize pixel values
    validation_split=0.2,
    horizontal_flip=True   # horizontal flips as data augmentation
)

train_ds = train_data_gen.flow_from_directory(
    directory=data_dir,
    target_size=(height, width),
    batch_size=batch_size,
    shuffle=True,
    class_mode='categorical',
    subset='training'
)
test_ds = train_data_gen.flow_from_directory(
    directory=data_dir,
    target_size=(height, width),
    batch_size=batch_size,
    shuffle=True,
    class_mode='categorical',
    subset='validation'
)
```

Output:

```
Found 3430 images belonging to 2 classes.
Found 856 images belonging to 2 classes.
```

Displaying sample images:

```python
plt.figure(figsize=(15, 10))  # figure 15 wide by 10 tall

for images, labels in train_ds:
    for i in range(30):
        ax = plt.subplot(5, 6, i + 1)
        plt.imshow(images[i])
        plt.title(all_label_names[np.argmax(labels[i])])
        plt.axis("off")
    break
plt.show()
```

4. Transfer learning with VGG16

```python
base_model = tf.keras.applications.VGG16(
    include_top=False,
    weights="imagenet",
    input_shape=(height, width, 3),
    pooling='max'
)
x = base_model.output
x = tf.keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
x = tf.keras.layers.Dense(256, activation='relu')(x)
x = tf.keras.layers.Dropout(rate=.45, seed=123)(x)
output = tf.keras.layers.Dense(2, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=output)
```
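As a sanity check on the classification head above: with `pooling='max'`, VGG16's output is a 512-dimensional vector, so the head's parameter counts can be worked out by hand (a small sketch, independent of TensorFlow):

```python
features = 512                         # VGG16 channels after global max pooling

# BatchNormalization: gamma, beta, moving mean, moving variance per feature
bn_params = 4 * features               # 2048 (half trainable, half not)

# Dense(256) on the 512-d vector: weights + biases
dense1_params = features * 256 + 256   # 131328

# Final Dense(2) softmax layer
dense2_params = 256 * 2 + 2            # 514

print(bn_params, dense1_params, dense2_params)
```

These numbers should match the head's rows in `model.summary()`.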

Setting up the optimizer:

```python
# Initial learning rate
init_learning_rate = 1e-4
lr_sch = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=init_learning_rate,
    decay_steps=50,
    decay_rate=0.96,
    staircase=True
)
gen_optimizer = tf.keras.optimizers.Adam(learning_rate=lr_sch)
```
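The staircase schedule above multiplies the learning rate by 0.96 after every 50 optimizer steps. The same rule in plain Python (a sketch of `ExponentialDecay`'s documented behaviour, not the TensorFlow implementation):

```python
def lr_at(step, init_lr=1e-4, decay_steps=50, decay_rate=0.96, staircase=True):
    """Learning rate after `step` optimizer updates."""
    exponent = step / decay_steps
    if staircase:
        exponent = int(exponent)  # decay in discrete jumps, not continuously
    return init_lr * decay_rate ** exponent

print(lr_at(0))    # 1e-4
print(lr_at(49))   # still 1e-4 (first jump happens at step 50)
print(lr_at(100))  # 1e-4 * 0.96**2
```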

Compiling and training the network:

```python
model.compile(
    optimizer=gen_optimizer,
    # The original post used BinaryCrossentropy(from_logits=True), but the final
    # layer is a 2-unit softmax fed one-hot labels (class_mode='categorical'),
    # so categorical cross-entropy on probabilities is the matching loss.
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
    metrics=['accuracy']
)

history = model.fit(
    train_ds,
    epochs=epochs,
    validation_data=test_ds
)
```
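For a single example, categorical cross-entropy on softmax probabilities reduces to the negative log of the probability assigned to the true class. A dependency-free sketch (the probabilities are made up for illustration):

```python
import math

def categorical_crossentropy(y_true, y_pred):
    """y_true: one-hot list; y_pred: softmax probabilities."""
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred))

# Model 90% confident in the correct class:
loss = categorical_crossentropy([0.0, 1.0], [0.1, 0.9])
print(round(loss, 4))  # 0.1054, i.e. -ln(0.9)
```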

Training results:

The final model accuracy is around 95%; of the networks I experimented with, VGG16 achieved the highest accuracy.

5. Plotting the confusion matrix

Saving the model:

```python
# The save path must match the one passed to load_model below
model.save("E:/Users/yqx/PycharmProjects/Man_Women_Rec/model.h5")
```

Loading the model:

```python
model = tf.keras.models.load_model("E:/Users/yqx/PycharmProjects/Man_Women_Rec/model.h5")
```

Testing the model on the validation data:
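Note that `validation_ds` is used below but never defined in the original post. A minimal sketch of how it might be built from the held-out images; the directory path is hypothetical, and the batch size of 64 is an assumption chosen to fill the 8×8 grid plotted below:

```python
# Hypothetical path to the remaining held-out validation images;
# adjust to wherever that data actually lives.
val_dir = "E:/tmp/.keras/datasets/Man_Women/faces_validation"

val_data_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
validation_ds = val_data_gen.flow_from_directory(
    directory=val_dir,
    target_size=(height, width),
    batch_size=64,           # 64 images fill the 8x8 grid below
    shuffle=True,
    class_mode='categorical'
)
```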

```python
plt.figure(figsize=(50, 50))

for images, labels in validation_ds:
    num = 0
    total = 0
    for i in range(64):
        total += 1
        ax = plt.subplot(8, 8, i + 1)
        plt.imshow(images[i])
        img_array = tf.expand_dims(images[i], 0)
        pre = model.predict(img_array)
        if np.argmax(pre) == np.argmax(labels[i]):
            num += 1
        plt.title(all_label_names[np.argmax(pre)])
        plt.axis("off")
    print(total)
    print(num)
    break
plt.suptitle("The acc rating of validation is: {}".format(num / total))
plt.show()
```


Plotting the confusion matrix:

```python
from sklearn.metrics import confusion_matrix
import seaborn as sns
import pandas as pd

# Plot the confusion matrix
def plot_cm(labels, pre):
    conf_numpy = confusion_matrix(labels, pre)              # build the matrix from true and predicted labels
    conf_df = pd.DataFrame(conf_numpy, index=all_label_names,
                           columns=all_label_names)         # wrap the counts in a labeled DataFrame
    plt.figure(figsize=(8, 7))
    sns.heatmap(conf_df, annot=True, fmt="d", cmap="BuPu")  # draw the matrix as a heatmap
    plt.title('Confusion matrix', fontsize=15)
    plt.ylabel('True label', fontsize=14)
    plt.xlabel('Predicted label', fontsize=14)
    plt.show()

test_pre = []
test_label = []
for images, labels in validation_ds:
    for image, label in zip(images, labels):
        img_array = tf.expand_dims(image, 0)                  # add a batch dimension
        pre = model.predict(img_array)                        # run the prediction
        test_pre.append(all_label_names[np.argmax(pre)])      # record the predicted label
        test_label.append(all_label_names[np.argmax(label)])  # record the true label
    break  # due to hardware constraints, only a single batch is used here

plot_cm(test_label, test_pre)  # plot the confusion matrix
```
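From the confusion matrix, overall accuracy follows directly: correct predictions sit on the diagonal, so accuracy is the diagonal sum divided by the total count. A dependency-free sketch (the counts are illustrative, not the article's actual results):

```python
def accuracy_from_cm(cm):
    """Overall accuracy = diagonal sum / total count."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

# rows = true class (man, woman), columns = predicted class
cm = [[60, 4],
      [3, 61]]
print(accuracy_from_cm(cm))  # 121/128 = 0.9453125
```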

6. Evaluating on the validation set

```python
model = tf.keras.models.load_model("E:/Users/yqx/PycharmProjects/Man_Women_Rec/model.h5")
model.evaluate(validation_ds)
```

The final result:

```
716/716 [==============================] - 418s 584ms/step - loss: 0.5345 - accuracy: 0.9363
[0.5345107175451417, 0.936279]  # loss and accuracy
```

The model accuracy is fairly high. On kaggle I have seen models reaching around 99% accuracy, so readers are welcome to try to push further.

Keep at it!
