
A Comparative Study of Deep-Learning-Based Traffic Sign Recognition Algorithms: A TensorFlow2 Implementation

  • 🔗 Runtime environment: python3
  • 🚩 Author: K同學啊
  • 🥇 Featured column: 《深度學習100例》
  • 🔥 Recommended column: 《新手入門深度學習》
  • 📚 From the column: 《Matplotlib教程》
  • 🧿 Outstanding column: 《Python入門100題》

Hi everyone, I'm K同學啊!

Today I'd like to share a hands-on undergraduate thesis project. In it I compare four models: VGG16, InceptionV3, DenseNet121, and MobileNetV2 (the article provides an architecture diagram for each model), and at the end you can pick your own images for prediction. The best recognition accuracy reaches 99.2%.

Table of Contents

    • I. Importing the Data
    • II. Defining the Models
      • 1. VGG16 Model
      • 2. InceptionV3 Model
      • 3. DenseNet121 Model
      • 4. MobileNetV2 Model
    • III. Analyzing the Results
      • 1. Accuracy Comparison
      • 2. Loss Comparison
      • 3. Confusion Matrix
      • 4. Evaluation Metrics
    • IV. Predicting a Specified Image

I. Importing the Data

""" 關于image_dataset_from_directory()的詳細介紹可以參考文章:https://mtyjkh.blog.csdn.net/article/details/117018789 """ train_ds = tf.keras.preprocessing.image_dataset_from_directory("./1-data/",validation_split=0.2,subset="training",seed=12,image_size=(img_height, img_width),batch_size=batch_size) Found 1308 files belonging to 14 classes. Using 1047 files for training. """ 關于image_dataset_from_directory()的詳細介紹可以參考文章:https://mtyjkh.blog.csdn.net/article/details/117018789 """ val_ds = tf.keras.preprocessing.image_dataset_from_directory("./1-data/",validation_split=0.2,subset="validation",seed=12,image_size=(img_height, img_width),batch_size=batch_size) Found 1308 files belonging to 14 classes. Using 261 files for validation. class_names = train_ds.class_names print(class_names) ['15', '16', '17', '20', '22', '23', '24', '26', '27', '28', '29', '30', '31', '32'] train_ds <BatchDataset shapes: ((None, 224, 224, 3), (None,)), types: (tf.float32, tf.int32)> AUTOTUNE = tf.data.AUTOTUNE# 歸一化 def train_preprocessing(image,label):return (image/255.0,label)train_ds = (train_ds.cache().map(train_preprocessing) # 這里可以設置預處理函數.prefetch(buffer_size=AUTOTUNE) )val_ds = (val_ds.cache().map(train_preprocessing) # 這里可以設置預處理函數.prefetch(buffer_size=AUTOTUNE) ) plt.figure(figsize=(10, 8)) # 圖形的寬為10高為5for images, labels in train_ds.take(1):for i in range(15):plt.subplot(4, 5, i + 1)plt.xticks([])plt.yticks([])plt.grid(False)# 顯示圖片plt.imshow(images[i])# 顯示標簽plt.xlabel(class_names[int(labels[i])])plt.show()

II. Defining the Models

1. VGG16 Model

# Load the pre-trained model
vgg16_base_model = tf.keras.applications.vgg16.VGG16(
    weights='imagenet',
    include_top=False,
    # input_tensor=tf.keras.Input(shape=(img_width, img_height, 3)),
    input_shape=(img_width, img_height, 3),
    pooling='max')

# Freeze the backbone so only the new head is trained
for layer in vgg16_base_model.layers:
    layer.trainable = False

X = vgg16_base_model.output
X = Dropout(0.4)(X)
output = Dense(len(class_names), activation='softmax')(X)
vgg16_model = Model(inputs=vgg16_base_model.input, outputs=output)

vgg16_model.compile(optimizer="adam",
                    loss='sparse_categorical_crossentropy',
                    metrics=['accuracy'])
# vgg16_model.summary()

vgg16_history = vgg16_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)

Epoch 1/10
33/33 [==============================] - 8s 113ms/step - loss: 2.7396 - accuracy: 0.2531 - val_loss: 1.4678 - val_accuracy: 0.6092
Epoch 2/10
33/33 [==============================] - 2s 45ms/step - loss: 1.5873 - accuracy: 0.5091 - val_loss: 0.8500 - val_accuracy: 0.8046
Epoch 3/10
33/33 [==============================] - 2s 45ms/step - loss: 1.0996 - accuracy: 0.6495 - val_loss: 0.5299 - val_accuracy: 0.9272
Epoch 4/10
33/33 [==============================] - 2s 45ms/step - loss: 0.7349 - accuracy: 0.7947 - val_loss: 0.3765 - val_accuracy: 0.9349
Epoch 5/10
33/33 [==============================] - 2s 45ms/step - loss: 0.5373 - accuracy: 0.8481 - val_loss: 0.2888 - val_accuracy: 0.9502
Epoch 6/10
33/33 [==============================] - 2s 45ms/step - loss: 0.4326 - accuracy: 0.8892 - val_loss: 0.2422 - val_accuracy: 0.9617
Epoch 7/10
33/33 [==============================] - 2s 45ms/step - loss: 0.3350 - accuracy: 0.9198 - val_loss: 0.2068 - val_accuracy: 0.9693
Epoch 8/10
33/33 [==============================] - 2s 45ms/step - loss: 0.2821 - accuracy: 0.9398 - val_loss: 0.1713 - val_accuracy: 0.9885
Epoch 9/10
33/33 [==============================] - 2s 45ms/step - loss: 0.2489 - accuracy: 0.9456 - val_loss: 0.1589 - val_accuracy: 0.9847
Epoch 10/10
33/33 [==============================] - 2s 48ms/step - loss: 0.2146 - accuracy: 0.9608 - val_loss: 0.1511 - val_accuracy: 0.9885

2. InceptionV3 Model

# Load the pre-trained model
InceptionV3_base_model = tf.keras.applications.inception_v3.InceptionV3(
    weights='imagenet',
    include_top=False,
    input_shape=(img_width, img_height, 3),
    pooling='max')

# Freeze the backbone so only the new head is trained
for layer in InceptionV3_base_model.layers:
    layer.trainable = False

X = InceptionV3_base_model.output
X = Dropout(0.4)(X)
output = Dense(len(class_names), activation='softmax')(X)
InceptionV3_model = Model(inputs=InceptionV3_base_model.input, outputs=output)

InceptionV3_model.compile(optimizer="adam",
                          loss='sparse_categorical_crossentropy',
                          metrics=['accuracy'])
# InceptionV3_model.summary()

InceptionV3_history = InceptionV3_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)

Epoch 1/10
33/33 [==============================] - 5s 82ms/step - loss: 3.1642 - accuracy: 0.4040 - val_loss: 0.6005 - val_accuracy: 0.8352
Epoch 2/10
33/33 [==============================] - 1s 34ms/step - loss: 0.7241 - accuracy: 0.8042 - val_loss: 0.2476 - val_accuracy: 0.9234
Epoch 3/10
33/33 [==============================] - 1s 34ms/step - loss: 0.3558 - accuracy: 0.8949 - val_loss: 0.2323 - val_accuracy: 0.9425
Epoch 4/10
33/33 [==============================] - 1s 35ms/step - loss: 0.2435 - accuracy: 0.9226 - val_loss: 0.1599 - val_accuracy: 0.9617
Epoch 5/10
33/33 [==============================] - 1s 34ms/step - loss: 0.1444 - accuracy: 0.9551 - val_loss: 0.1246 - val_accuracy: 0.9617
Epoch 6/10
33/33 [==============================] - 1s 34ms/step - loss: 0.1508 - accuracy: 0.9522 - val_loss: 0.1231 - val_accuracy: 0.9732
Epoch 7/10
33/33 [==============================] - 1s 35ms/step - loss: 0.0793 - accuracy: 0.9761 - val_loss: 0.0853 - val_accuracy: 0.9885
Epoch 8/10
33/33 [==============================] - 1s 35ms/step - loss: 0.0636 - accuracy: 0.9809 - val_loss: 0.1223 - val_accuracy: 0.9732
Epoch 9/10
33/33 [==============================] - 1s 35ms/step - loss: 0.0503 - accuracy: 0.9857 - val_loss: 0.0769 - val_accuracy: 0.9923
Epoch 10/10
33/33 [==============================] - 1s 34ms/step - loss: 0.0346 - accuracy: 0.9904 - val_loss: 0.1066 - val_accuracy: 0.9923

3. DenseNet121 Model


# Load the pre-trained model
DenseNet121_base_model = tf.keras.applications.densenet.DenseNet121(
    weights='imagenet',
    include_top=False,
    input_shape=(img_width, img_height, 3),
    pooling='max')

# Freeze the backbone so only the new head is trained
for layer in DenseNet121_base_model.layers:
    layer.trainable = False

X = DenseNet121_base_model.output
X = Dropout(0.4)(X)
output = Dense(len(class_names), activation='softmax')(X)
DenseNet121_model = Model(inputs=DenseNet121_base_model.input, outputs=output)

DenseNet121_model.compile(optimizer="adam",
                          loss='sparse_categorical_crossentropy',
                          metrics=['accuracy'])
# DenseNet121_model.summary()

DenseNet121_history = DenseNet121_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)

Epoch 1/10
33/33 [==============================] - 7s 109ms/step - loss: 4.5573 - accuracy: 0.2932 - val_loss: 1.2358 - val_accuracy: 0.6322
Epoch 2/10
33/33 [==============================] - 1s 43ms/step - loss: 2.0711 - accuracy: 0.5482 - val_loss: 0.4970 - val_accuracy: 0.8391
Epoch 3/10
33/33 [==============================] - 1s 41ms/step - loss: 1.2808 - accuracy: 0.6953 - val_loss: 0.2534 - val_accuracy: 0.9042
Epoch 4/10
33/33 [==============================] - 1s 41ms/step - loss: 0.8280 - accuracy: 0.7736 - val_loss: 0.1845 - val_accuracy: 0.9502
Epoch 5/10
33/33 [==============================] - 1s 41ms/step - loss: 0.5928 - accuracy: 0.8300 - val_loss: 0.1211 - val_accuracy: 0.9770
Epoch 6/10
33/33 [==============================] - 1s 41ms/step - loss: 0.4390 - accuracy: 0.8749 - val_loss: 0.1046 - val_accuracy: 0.9808
Epoch 7/10
33/33 [==============================] - 1s 41ms/step - loss: 0.4108 - accuracy: 0.8797 - val_loss: 0.0950 - val_accuracy: 0.9885
Epoch 8/10
33/33 [==============================] - 1s 41ms/step - loss: 0.3137 - accuracy: 0.9102 - val_loss: 0.0662 - val_accuracy: 0.9808
Epoch 9/10
33/33 [==============================] - 1s 41ms/step - loss: 0.2416 - accuracy: 0.9284 - val_loss: 0.0698 - val_accuracy: 0.9885
Epoch 10/10
33/33 [==============================] - 1s 41ms/step - loss: 0.2524 - accuracy: 0.9217 - val_loss: 0.0597 - val_accuracy: 0.9923

4. MobileNetV2 Model

# Load the pre-trained model
MobileNetV2_base_model = tf.keras.applications.mobilenet_v2.MobileNetV2(
    weights='imagenet',
    include_top=False,
    input_shape=(img_width, img_height, 3),
    pooling='max')

# Freeze the backbone so only the new head is trained
for layer in MobileNetV2_base_model.layers:
    layer.trainable = False

X = MobileNetV2_base_model.output
X = Dropout(0.4)(X)
output = Dense(len(class_names), activation='softmax')(X)
MobileNetV2_model = Model(inputs=MobileNetV2_base_model.input, outputs=output)

MobileNetV2_model.compile(optimizer="adam",
                          loss='sparse_categorical_crossentropy',
                          metrics=['accuracy'])
# MobileNetV2_model.summary()

MobileNetV2_history = MobileNetV2_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)

Epoch 1/10
33/33 [==============================] - 3s 47ms/step - loss: 4.0865 - accuracy: 0.4403 - val_loss: 0.5897 - val_accuracy: 0.8812
Epoch 2/10
33/33 [==============================] - 1s 22ms/step - loss: 1.1042 - accuracy: 0.7536 - val_loss: 0.1841 - val_accuracy: 0.9540
Epoch 3/10
33/33 [==============================] - 1s 22ms/step - loss: 0.6147 - accuracy: 0.8596 - val_loss: 0.1722 - val_accuracy: 0.9770
Epoch 4/10
33/33 [==============================] - 1s 22ms/step - loss: 0.3826 - accuracy: 0.9007 - val_loss: 0.1505 - val_accuracy: 0.9770
Epoch 5/10
33/33 [==============================] - 1s 22ms/step - loss: 0.2290 - accuracy: 0.9370 - val_loss: 0.1408 - val_accuracy: 0.9885
Epoch 6/10
33/33 [==============================] - 1s 22ms/step - loss: 0.1976 - accuracy: 0.9484 - val_loss: 0.1294 - val_accuracy: 0.9923
Epoch 7/10
33/33 [==============================] - 1s 22ms/step - loss: 0.1193 - accuracy: 0.9608 - val_loss: 0.1038 - val_accuracy: 0.9923
Epoch 8/10
33/33 [==============================] - 1s 22ms/step - loss: 0.0859 - accuracy: 0.9675 - val_loss: 0.1140 - val_accuracy: 0.9923
Epoch 9/10
33/33 [==============================] - 1s 22ms/step - loss: 0.0973 - accuracy: 0.9704 - val_loss: 0.1292 - val_accuracy: 0.9923
Epoch 10/10
33/33 [==============================] - 1s 22ms/step - loss: 0.0504 - accuracy: 0.9828 - val_loss: 0.1361 - val_accuracy: 0.9923
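The four blocks above differ only in the backbone; the frozen weights, Dropout(0.4), softmax head, and compile settings are identical. As a refactoring sketch (not part of the original code, assuming the same hyperparameters as above), a small helper keeps the shared pattern in one place:

def build_transfer_model(backbone_fn):
    """Build a frozen-backbone classifier; backbone_fn is a Keras
    application constructor, e.g. tf.keras.applications.vgg16.VGG16."""
    base = backbone_fn(weights='imagenet',
                       include_top=False,
                       input_shape=(img_width, img_height, 3),
                       pooling='max')
    base.trainable = False  # freeze every backbone layer
    X = Dropout(0.4)(base.output)
    out = Dense(len(class_names), activation='softmax')(X)
    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer="adam",
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Usage sketch:
# MobileNetV2_model = build_transfer_model(tf.keras.applications.mobilenet_v2.MobileNetV2)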

III. Analyzing the Results

1. Accuracy Comparison

# Plotting code available in the full source (a sketch covering both the
# accuracy and loss comparisons appears after the next subsection)
plt.show()

2. Loss Comparison

# Plotting code available in the full source (see the sketch below)
plt.show()
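Since the comparison plots are not reproduced in this excerpt, here is a minimal sketch of how they can be drawn from the four History objects returned by fit() above; the styling is an assumption and the original article's figures may differ:

histories = {
    "VGG16": vgg16_history,
    "InceptionV3": InceptionV3_history,
    "DenseNet121": DenseNet121_history,
    "MobileNetV2": MobileNetV2_history,
}

plt.figure(figsize=(12, 5))
for i, metric in enumerate(["val_accuracy", "val_loss"]):
    plt.subplot(1, 2, i + 1)
    for name, history in histories.items():
        # history.history holds the standard Keras per-epoch metric lists
        plt.plot(history.history[metric], label=name)
    plt.xlabel("Epoch")
    plt.ylabel(metric)
    plt.legend()
plt.show()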

3. Confusion Matrix

# Available in the full source
plot_cm(val_label, val_pre)
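Neither plot_cm nor the val_label / val_pre arrays are shown in this excerpt. A minimal sketch of one way to produce them, assuming the InceptionV3 model and the val_ds pipeline defined earlier (this plot_cm is a stand-in built on sklearn and seaborn, not necessarily the original helper):

import seaborn as sns
from sklearn.metrics import confusion_matrix

# Collect ground-truth labels and predicted class indices over the validation set
val_label, val_pre = [], []
for images, labels in val_ds:
    preds = InceptionV3_model.predict(images, verbose=0)
    val_pre.extend(np.argmax(preds, axis=1))
    val_label.extend(labels.numpy())

def plot_cm(labels, predictions):
    cm = confusion_matrix(labels, predictions)
    plt.figure(figsize=(10, 8))
    sns.heatmap(cm, annot=True, fmt="d",
                xticklabels=class_names, yticklabels=class_names)
    plt.xlabel("Predicted label")
    plt.ylabel("True label")
    plt.show()

plot_cm(val_label, val_pre)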

4. Evaluation Metrics

  • support: the number of samples of this class in the test data;
  • precision: among the samples predicted as a given class, the fraction that truly belong to it; precision = TP / (TP + FP).
  • recall: among all samples of a given class, the fraction that were correctly classified; recall = TP / (TP + FN).
  • f1-score: the harmonic mean of precision and recall; F1 = 2 · precision · recall / (precision + recall) (see the quick check after this list).
  • accuracy: the ratio of correctly predicted samples to the total number of samples.
  • macro avg: the unweighted mean of each metric across all classes.
  • weighted avg: the mean of each metric across classes, weighted by each class's share of the total samples.
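As a quick sanity check of these formulas (illustrative numbers only, not taken from the report below):

# Toy check: 8 true positives, 2 false positives, 4 false negatives
TP, FP, FN = 8, 2, 4
precision = TP / (TP + FP)                                 # 0.8
recall = TP / (TP + FN)                                    # ~0.667
f1 = 2 * precision * recall / (precision + recall)         # ~0.727
print(precision, recall, f1)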
from sklearn import metrics

def test_accuracy_report(model):
    print(metrics.classification_report(val_label, val_pre, target_names=class_names))
    score = model.evaluate(val_ds, verbose=0)
    print('Loss function: %s, accuracy:' % score[0], score[1])

test_accuracy_report(InceptionV3_model)

              precision    recall  f1-score   support

          15       1.00      1.00      1.00         2
          16       1.00      1.00      1.00        28
          17       1.00      1.00      1.00        25
          20       1.00      0.33      0.50         3
          22       1.00      1.00      1.00         4
          23       1.00      1.00      1.00         1
          24       1.00      1.00      1.00        16
          26       1.00      1.00      1.00        32
          27       0.71      1.00      0.83         5
          28       1.00      1.00      1.00        90
          29       1.00      1.00      1.00         5
          30       1.00      1.00      1.00        33
          31       1.00      1.00      1.00         8
          32       1.00      1.00      1.00         9

    accuracy                           0.99       261
   macro avg       0.98      0.95      0.95       261
weighted avg       0.99      0.99      0.99       261

Loss function: 0.10659126937389374, accuracy: 0.992337167263031

IV. Predicting a Specified Image

from PIL import Image

img = Image.open("./1-data/17/017_0001.png")
image = tf.image.resize(img, [img_height, img_width])
# Note: the training pipeline scaled inputs by 1/255; applying the same
# scaling here (image / 255.0) would match the training distribution.
img_array = tf.expand_dims(image, 0)

predictions = InceptionV3_model.predict(img_array)
print("Prediction:", np.argmax(predictions))

Prediction: 11
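np.argmax returns a class index rather than the label itself; a one-line extension (not in the original code) maps it back through class_names:

# Map the predicted index back to the folder-name label
print("Predicted class:", class_names[np.argmax(predictions)])  # index 11 -> '30'

Note that the test image comes from the '17' folder while index 11 maps to '30'; the missing /255 scaling flagged in the comment above is one plausible cause of such a mismatch.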
