
Hand Gesture Recognition with Baidu's Paddle Framework (Hands-on Practice, with Source Code)

This article is a write-up of a hands-on project.
The project comes from the Baidu AI Studio platform; if you are interested, you can sign in and try it at:
https://aistudio.baidu.com/

1. The four steps of a deep learning project

  • Prepare the data and labels
  • Build the network model
  • Set the network hyperparameters
  • Train and evaluate the model

2. Code walkthrough

  • Import the library files
    import os
    import time
    import random
    import numpy as np
    from PIL import Image
    import matplotlib.pyplot as plt
    import paddle
    import paddle.fluid as fluid
    import paddle.fluid.layers as layers
    from multiprocessing import cpu_count
    from paddle.fluid.dygraph import Pool2D, Conv2D
    from paddle.fluid.dygraph import Linear
  • Prepare the data and labels

    The dataset Paddle provides contains hand gestures for the digits 0-9; each gesture has 200+ color images at a resolution of 100x100.
    # Generate the image list files
    data_path = 'Dataset'  # path to your dataset
    character_folders = os.listdir(data_path)
    # print(character_folders)
    if os.path.exists('./train_data.list'):
        os.remove('./train_data.list')
    if os.path.exists('./test_data.list'):
        os.remove('./test_data.list')

    for character_folder in character_folders:
        with open('./train_data.list', 'a') as f_train:
            with open('./test_data.list', 'a') as f_test:
                if character_folder == '.DS_Store':
                    continue
                character_imgs = os.listdir(os.path.join(data_path, character_folder))
                count = 0
                for img in character_imgs:
                    if img == '.DS_Store':
                        continue
                    # every tenth image goes into the test list, the rest into the training list
                    if count % 10 == 0:
                        f_test.write(os.path.join(data_path, character_folder, img) + '\t' + character_folder + '\n')
                    else:
                        f_train.write(os.path.join(data_path, character_folder, img) + '\t' + character_folder + '\n')
                    count += 1
    print('Image lists generated')
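
    As a quick sanity check (my own addition, not part of the original tutorial), you can count the entries in each list file and display the first training sample to confirm that paths and labels line up; the snippet below relies only on the files generated above.

    # Optional check: how many samples ended up in each list, and what does one look like?
    with open('./train_data.list') as f:
        train_lines = f.readlines()
    with open('./test_data.list') as f:
        test_lines = f.readlines()
    print(len(train_lines), 'training samples,', len(test_lines), 'test samples')

    sample_path, sample_label = train_lines[0].strip().split('\t')
    plt.imshow(Image.open(sample_path))
    plt.title('label: ' + sample_label)
    plt.show()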

    Use paddle's reader module to build the training and test set readers:

    # Define the readers for the training and test sets
    def data_mapper(sample):
        img, label = sample
        img = Image.open(img)
        img = img.resize((100, 100), Image.ANTIALIAS)
        img = np.array(img).astype('float32')
        img = img.transpose((2, 0, 1))  # HWC -> CHW
        img = img / 255.0
        return img, label

    def data_reader(data_list_path):
        def reader():
            with open(data_list_path, 'r') as f:
                lines = f.readlines()
                for line in lines:
                    img, label = line.split('\t')
                    yield img, int(label)
        return paddle.reader.xmap_readers(data_mapper, reader, cpu_count(), 512)

    # Data provider for training
    # buf_size is the shuffle buffer; the larger it is, the more thoroughly the image order is shuffled
    train_reader = paddle.batch(reader=paddle.reader.shuffle(reader=data_reader('./train_data.list'), buf_size=1024),
                                batch_size=32)
    # Data provider for testing
    test_reader = paddle.batch(reader=data_reader('./test_data.list'), batch_size=32)
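
    To verify that the readers produce what the network expects, a small check like the one below (my own addition) pulls a single batch and prints its contents; each batch should hold up to 32 (image, label) pairs with images of shape (3, 100, 100).

    # Peek at one training batch
    first_batch = next(train_reader())
    img0, label0 = first_batch[0]
    print(len(first_batch), img0.shape, label0)  # e.g. 32 (3, 100, 100) 7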
  • Build the neural network

    The network here follows the classic AlexNet structure.

    # Define the network (an AlexNet-style CNN)
    class MyDNN(fluid.dygraph.Layer):
        def __init__(self, name_scope, num_classes=10):
            super(MyDNN, self).__init__(name_scope)
            name_scope = self.full_name()
            self.conv1 = Conv2D(num_channels=3, num_filters=96, filter_size=11, stride=4, padding=5, act='relu')
            self.pool1 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
            self.conv2 = Conv2D(num_channels=96, num_filters=256, filter_size=5, stride=1, padding=2, act='relu')
            self.pool2 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
            self.conv3 = Conv2D(num_channels=256, num_filters=384, filter_size=3, stride=1, padding=1, act='relu')
            self.conv4 = Conv2D(num_channels=384, num_filters=384, filter_size=3, stride=1, padding=1, act='relu')
            self.conv5 = Conv2D(num_channels=384, num_filters=256, filter_size=3, stride=1, padding=1, act='relu')
            self.pool5 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
            # for a 100x100 input the feature map after pool5 is 256 x 3 x 3, so the flattened size is 2304
            self.fc1 = Linear(input_dim=2304, output_dim=4096, act='relu')
            self.drop_ratio1 = 0.5
            self.fc2 = Linear(input_dim=4096, output_dim=4096, act='relu')
            self.drop_ratio2 = 0.5
            self.fc3 = Linear(input_dim=4096, output_dim=num_classes)

        def forward(self, x):
            x = self.conv1(x)
            x = self.pool1(x)
            x = self.conv2(x)
            x = self.pool2(x)
            x = self.conv3(x)
            x = self.conv4(x)
            x = self.conv5(x)
            x = self.pool5(x)
            x = fluid.layers.reshape(x, [x.shape[0], -1])
            x = self.fc1(x)
            # dropout after the fully connected layer to reduce overfitting
            x = fluid.layers.dropout(x, self.drop_ratio1)
            x = self.fc2(x)
            # dropout after the fully connected layer to reduce overfitting
            x = fluid.layers.dropout(x, self.drop_ratio2)
            x = self.fc3(x)
            return x
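
    A quick way to confirm that all the layer dimensions fit together is to push one random 100x100 RGB image through the network; this shape check is my own addition and is not part of the original tutorial.

    # Shape check: one dummy image in, one score per gesture class out
    with fluid.dygraph.guard():
        net = MyDNN('shape_check')
        dummy = np.random.rand(1, 3, 100, 100).astype('float32')
        out = net(fluid.dygraph.to_variable(dummy))
        print(out.shape)  # expected: [1, 10]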
  • Set the network hyperparameters
    # Train using dygraph (dynamic graph) mode
    with fluid.dygraph.guard():
        model = MyDNN('Alexnet')  # instantiate the model
        model.train()  # switch to training mode
        opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9, parameter_list=model.parameters())
        epochs_num = 50  # number of epochs
        for pass_num in range(epochs_num):
            for batch_id, data in enumerate(train_reader()):
                images = np.array([x[0].reshape(3, 100, 100) for x in data], np.float32)
                labels = np.array([x[1] for x in data]).astype('int64')
                labels = labels[:, np.newaxis]
                image = fluid.dygraph.to_variable(images)
                label = fluid.dygraph.to_variable(labels)
                predict = model(image)  # forward pass
                loss = fluid.layers.softmax_with_cross_entropy(predict, label)
                avg_loss = fluid.layers.mean(loss)  # mean loss over the batch
                acc = fluid.layers.accuracy(predict, label)  # batch accuracy
                if batch_id != 0 and batch_id % 50 == 0:
                    print("train_pass:{},batch_id:{},train_loss:{},train_acc:{}".format(
                        pass_num, batch_id, avg_loss.numpy(), acc.numpy()))
                avg_loss.backward()
                opt.minimize(avg_loss)
                model.clear_gradients()
        fluid.save_dygraph(model.state_dict(), 'MyDNN')  # save the model
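
    Note that softmax_with_cross_entropy takes raw logits together with int64 labels of shape [batch, 1], which is why the loop above adds np.newaxis to the label array. The toy example below is my own illustration (not from the original post) of the same loss and accuracy calls on a hand-made two-sample batch.

    # Toy batch: two samples, three classes, both predicted correctly
    with fluid.dygraph.guard():
        logits = fluid.dygraph.to_variable(np.array([[2.0, 0.5, 0.1],
                                                     [0.2, 0.1, 3.0]], dtype='float32'))
        lab = fluid.dygraph.to_variable(np.array([[0], [2]], dtype='int64'))
        print(fluid.layers.softmax_with_cross_entropy(logits, lab).numpy())  # per-sample loss
        print(fluid.layers.accuracy(logits, lab).numpy())                    # 1.0 here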
  • Evaluate the model
    # Evaluate the trained model on the test set
    with fluid.dygraph.guard():
        accs = []
        model_dict, _ = fluid.load_dygraph('MyDNN')
        model = MyDNN('Alexnet')
        model.load_dict(model_dict)  # load the trained parameters
        model.eval()  # switch to evaluation mode
        for batch_id, data in enumerate(test_reader()):  # iterate over the test set
            images = np.array([x[0].reshape(3, 100, 100) for x in data], np.float32)
            labels = np.array([x[1] for x in data]).astype('int64')
            labels = labels[:, np.newaxis]
            image = fluid.dygraph.to_variable(images)
            label = fluid.dygraph.to_variable(labels)
            predict = model(image)
            acc = fluid.layers.accuracy(predict, label)
            accs.append(acc.numpy()[0])
        avg_acc = np.mean(accs)
        print(avg_acc)
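
    Once the model reaches a reasonable accuracy, it can also classify a single picture. The sketch below is my own addition and is not part of the original post; the image path is only a placeholder for one of your own test pictures, and the preprocessing simply mirrors data_mapper.

    # Single-image inference sketch
    def load_image(path):
        img = Image.open(path)
        img = img.resize((100, 100), Image.ANTIALIAS)
        img = np.array(img).astype('float32').transpose((2, 0, 1)) / 255.0
        return img[np.newaxis, :]  # add the batch dimension -> (1, 3, 100, 100)

    with fluid.dygraph.guard():
        model_dict, _ = fluid.load_dygraph('MyDNN')
        model = MyDNN('Alexnet')
        model.load_dict(model_dict)
        model.eval()
        result = model(fluid.dygraph.to_variable(load_image('Dataset/5/example.jpg')))  # placeholder path
        print('predicted gesture:', np.argmax(result.numpy()))  # index 0-9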

3. Practical notes

While writing the code I learned a lot. This hands-on project came with a dedicated tutorial course and, more importantly, a WeChat discussion group. The group is full of very capable people and everyone is generous with help, so I picked up a great deal from them. I also ran into plenty of problems along the way, and the teaching assistants helped me solve them.

As for my own shortcomings: my fundamentals are still weak and I am a beginner, so the official API documentation is quite hard for me to read. I clearly need to keep working at it, and I also hope the documentation can be polished so that beginners can get up to speed quickly.

That's all!
