
Deep Learning: Gender Classification with a Convolutional Neural Network (VGG16)

Published: 2023/12/15 · 豆豆

This article, collected and organized by 生活随笔, introduces gender classification with a convolutional neural network (VGG16) and is shared here for reference.

I stumbled upon a dataset on Kaggle aimed at improving how accurately a network can tell men from women. I experimented with transfer learning on several convolutional neural networks; the best model reaches roughly 95% accuracy. The data was split into training, test, and validation sets, and the final accuracy on the validation set was 93.63%.

1. Importing the libraries

```python
import os, PIL, pathlib

import tensorflow as tf
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, models, Input
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (Conv2D, Dense, Flatten, Dropout,
                                     BatchNormalization, Activation,
                                     MaxPooling2D, AveragePooling2D, Concatenate,
                                     Lambda, GlobalAveragePooling2D)
from tensorflow.keras import backend as K

# Chinese font support for matplotlib
plt.rcParams['font.sans-serif'] = ['SimHei']   # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False     # display minus signs correctly
```

2. Loading the data

The original Kaggle download contains 20000+ images. Due to hardware limits, I picked 4286 of them for the training and test sets and kept the rest as the validation set.
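The post doesn't show how the 4286-image subset was carved out of the 20000+ downloads. A minimal sketch, assuming the Kaggle images sit in one folder per class; the function name, paths, and counts here are hypothetical, not from the original:

```python
# Hypothetical helper: copy a random, reproducible subset of images per class
# folder from the raw Kaggle download into the working dataset folder.
import pathlib
import random
import shutil

def sample_subset(src_dir, dst_dir, n_per_class, seed=123):
    """Copy up to n_per_class randomly chosen images from each class folder."""
    random.seed(seed)  # fixed seed so the subset is reproducible
    copied = []
    for class_dir in pathlib.Path(src_dir).iterdir():
        if not class_dir.is_dir():
            continue
        images = sorted(class_dir.glob('*'))
        chosen = random.sample(images, min(n_per_class, len(images)))
        out = pathlib.Path(dst_dir) / class_dir.name
        out.mkdir(parents=True, exist_ok=True)
        for img in chosen:
            shutil.copy(img, out / img.name)
            copied.append(out / img.name)
    return copied
```

The images left behind in the source folder can then serve as the validation set.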

```python
data_dir = "E:/tmp/.keras/datasets/Man_Women/faces_test"
data_dir = pathlib.Path(data_dir)
img_count = len(list(data_dir.glob('*/*')))
print(img_count)

all_images_paths = list(data_dir.glob('*'))
all_images_paths = [str(path) for path in all_images_paths]
# note: the index 6 depends on the directory depth of the path on this machine
all_label_names = [path.split("\\")[6].split(".")[0] for path in all_images_paths]
print(all_label_names)
```

```
4286
['man', 'woman']
```

Parameter settings:

```python
height = 224
width = 224
epochs = 15
batch_size = 32
```

3. Training and test sets

The training and test sets are split in an 8:2 ratio.

```python
train_data_gen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,          # normalize pixel values
    validation_split=0.2,
    horizontal_flip=True     # horizontal flips as data augmentation
)

train_ds = train_data_gen.flow_from_directory(
    directory=data_dir,
    target_size=(height, width),
    batch_size=batch_size,
    shuffle=True,
    class_mode='categorical',
    subset='training'
)
test_ds = train_data_gen.flow_from_directory(
    directory=data_dir,
    target_size=(height, width),
    batch_size=batch_size,
    shuffle=True,
    class_mode='categorical',
    subset='validation'
)
```

```
Found 3430 images belonging to 2 classes.
Found 856 images belonging to 2 classes.
```

Displaying some sample images:

```python
plt.figure(figsize=(15, 10))  # figure width 15, height 10

for images, labels in train_ds:
    for i in range(30):
        ax = plt.subplot(5, 6, i + 1)
        plt.imshow(images[i])
        plt.title(all_label_names[np.argmax(labels[i])])
        plt.axis("off")
    break
plt.show()
```

4. Transfer learning with the VGG16 network

```python
base_model = tf.keras.applications.VGG16(include_top=False,
                                         weights="imagenet",
                                         input_shape=(height, width, 3),
                                         pooling='max')
x = base_model.output
x = tf.keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
x = tf.keras.layers.Dense(256, activation='relu')(x)
x = tf.keras.layers.Dropout(rate=.45, seed=123)(x)
output = tf.keras.layers.Dense(2, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=output)
```
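Note that the code above leaves every VGG16 layer trainable, so the whole network is fine-tuned from the start. A common transfer-learning variant, not used in the post, is to freeze the convolutional base so that only the new classification head trains at first. A minimal sketch (`weights=None` here only avoids downloading the ImageNet weights in this sketch; in practice you would keep `weights="imagenet"`):

```python
import tensorflow as tf

# Build the VGG16 base; weights=None is an assumption for this sketch only.
base_model = tf.keras.applications.VGG16(include_top=False,
                                         weights=None,
                                         input_shape=(224, 224, 3),
                                         pooling='max')
base_model.trainable = False  # freeze every convolutional layer

# Minimal head (the post's version also adds BatchNormalization and Dropout).
x = tf.keras.layers.Dense(256, activation='relu')(base_model.output)
output = tf.keras.layers.Dense(2, activation='softmax')(x)
model = tf.keras.Model(inputs=base_model.input, outputs=output)
# Only the two Dense layers (4 weight tensors) remain trainable.
```

Freezing the base usually trains much faster and resists overfitting on a small dataset; unfreezing everything, as the post does, can squeeze out more accuracy if the learning rate is kept low.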

Setting up the optimizer

```python
# set up the optimizer
# initial learning rate
init_learning_rate = 1e-4
lr_sch = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=init_learning_rate,
    decay_steps=50,
    decay_rate=0.96,
    staircase=True
)
gen_optimizer = tf.keras.optimizers.Adam(learning_rate=lr_sch)
```

Compiling and training the network

```python
model.compile(optimizer=gen_optimizer,
              # the labels are one-hot and the last layer is softmax, so use
              # categorical cross-entropy; the original
              # BinaryCrossentropy(from_logits=True) does not match a softmax output
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])

history = model.fit(train_ds,
                    epochs=epochs,
                    validation_data=test_ds)
```

The training results are as follows:

The final model accuracy is around 95%; of all the networks I experimented with, VGG16 gave the highest accuracy.
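The original training-curve figure doesn't survive here, but the curves can be redrawn from the object returned by `model.fit`. A small helper, assuming the default Keras key names for `metrics=['accuracy']`:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

def plot_history(history_dict):
    """Plot training/validation accuracy and loss side by side."""
    acc, val_acc = history_dict['accuracy'], history_dict['val_accuracy']
    loss, val_loss = history_dict['loss'], history_dict['val_loss']
    epochs_range = range(1, len(acc) + 1)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
    ax1.plot(epochs_range, acc, label='train')
    ax1.plot(epochs_range, val_acc, label='val')
    ax1.set_title('Accuracy')
    ax1.legend()
    ax2.plot(epochs_range, loss, label='train')
    ax2.plot(epochs_range, val_loss, label='val')
    ax2.set_title('Loss')
    ax2.legend()
    return fig

# usage with the value returned by model.fit:
# plot_history(history.history)
```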

5. Plotting the confusion matrix

Saving the model:

```python
# the filename must match the one used when loading below (the original
# wrote "model_.h5" here but loaded "model.h5")
model.save("E:/Users/yqx/PycharmProjects/Man_Women_Rec/model.h5")
```

Loading the model:

```python
model = tf.keras.models.load_model("E:/Users/yqx/PycharmProjects/Man_Women_Rec/model.h5")
```

Using the model to test the images in the validation set:
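The `validation_ds` generator used below is never defined in the post. A minimal sketch of how it could be built, assuming the held-out images sit in a sibling folder (the function name and path are assumptions):

```python
import tensorflow as tf

height, width, batch_size = 224, 224, 32  # same values as above

def make_validation_ds(val_dir):
    """Build a generator over the held-out folder: rescaling only, no augmentation."""
    val_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
    return val_gen.flow_from_directory(
        directory=val_dir,
        target_size=(height, width),
        batch_size=batch_size,
        shuffle=False,            # keep a stable order for evaluation
        class_mode='categorical',
    )

# hypothetical path, mirroring the training folder layout:
# validation_ds = make_validation_ds("E:/tmp/.keras/datasets/Man_Women/faces_validation")
```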

```python
plt.figure(figsize=(50, 50))

for images, labels in validation_ds:
    num = 0
    total = 0
    for i in range(64):
        total += 1
        ax = plt.subplot(8, 8, i + 1)
        plt.imshow(images[i])
        img_array = tf.expand_dims(images[i], 0)
        pre = model.predict(img_array)
        if np.argmax(pre) == np.argmax(labels[i]):
            num += 1
        plt.title(all_label_names[np.argmax(pre)])
        plt.axis("off")
    print(total)
    print(num)
    break
plt.suptitle("The acc rating of validation is:{}".format(num / total))
plt.show()
```


Plotting the confusion matrix:

```python
from sklearn.metrics import confusion_matrix
import seaborn as sns
import pandas as pd

# plot the confusion matrix
def plot_cm(labels, pre):
    conf_numpy = confusion_matrix(labels, pre)   # build the matrix from true vs. predicted labels
    conf_df = pd.DataFrame(conf_numpy,
                           index=all_label_names,
                           columns=all_label_names)
    plt.figure(figsize=(8, 7))
    sns.heatmap(conf_df, annot=True, fmt="d", cmap="BuPu")
    plt.title('Confusion matrix', fontsize=15)
    plt.ylabel('True label', fontsize=14)
    plt.xlabel('Predicted label', fontsize=14)
    plt.show()

test_pre = []
test_label = []
for images, labels in validation_ds:
    for image, label in zip(images, labels):
        img_array = tf.expand_dims(image, 0)      # add a batch dimension
        pre = model.predict(img_array)            # predicted probabilities
        test_pre.append(all_label_names[np.argmax(pre)])     # record the prediction
        test_label.append(all_label_names[np.argmax(label)]) # record the ground truth
    break  # due to hardware limits, only one batch (128 images) is used here
plot_cm(test_label, test_pre)
```

6. Evaluating the validation set

```python
model = tf.keras.models.load_model("E:/Users/yqx/PycharmProjects/Man_Women_Rec/model.h5")
model.evaluate(validation_ds)
```

The final results are as follows:

```
716/716 [==============================] - 418s 584ms/step - loss: 0.5345 - accuracy: 0.9363
[0.5345107175451417, 0.936279]  # loss value and accuracy
```

The model's accuracy is fairly high. On Kaggle I have seen models reaching around 99% accuracy; readers are welcome to try for that.
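One common way to chase those ~99% results is staged fine-tuning: freeze everything except VGG16's last convolutional block and retrain at a very low learning rate. A hedged sketch, not from the post (`weights=None` only avoids a download here; use `weights="imagenet"` in practice):

```python
import tensorflow as tf

base_model = tf.keras.applications.VGG16(include_top=False,
                                         weights=None,
                                         input_shape=(224, 224, 3),
                                         pooling='max')
for layer in base_model.layers:
    # only the last conv block (block5_conv1..3) stays trainable
    layer.trainable = layer.name.startswith('block5')

x = tf.keras.layers.Dense(256, activation='relu')(base_model.output)
output = tf.keras.layers.Dense(2, activation='softmax')(x)
model = tf.keras.Model(inputs=base_model.input, outputs=output)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # low LR for fine-tuning
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

The low learning rate matters: with the pretrained weights nearly optimal, large updates in the unfrozen block would destroy the features transfer learning is meant to keep.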

Keep working hard!

Summary

The above is the full content of Deep Learning: Gender Classification with a Convolutional Neural Network (VGG16), collected and organized by 生活随笔. I hope it helps you solve the problems you ran into.
