

Building a BP Neural Network Model with PyTorch

Published: 2024/8/1

Hi everyone! I've recently been studying PyTorch in earnest, and I plan to use it to build a series of network models, including CNN, LSTM, Transformer, and others in later posts. I'm still very much a learner, so please point out any problems you spot.

The MNIST dataset is a handwritten-digit recognition task: each image has shape (28, 28, 1), and the label is the digit's class.



1、加載數據

This post uses the MNIST dataset throughout. Loading it with PyTorch looks like this:

# datasets provides loaders for many common datasets; here we use MNIST
import torch
from torchvision import datasets, transforms

batch_size = 128

# Load the data
def load_data():
    # Download the data
    train_data = datasets.MNIST(root='./data/',
                                train=True,
                                transform=transforms.ToTensor(),
                                download=True)
    test_data = datasets.MNIST(root='./data/',
                               train=False,
                               transform=transforms.ToTensor())
    # Return data iterators
    # shuffle: whether to shuffle the order
    train_loader = torch.utils.data.DataLoader(dataset=train_data,
                                               batch_size=batch_size,
                                               shuffle=True)
    test_loader = torch.utils.data.DataLoader(dataset=test_data,
                                              batch_size=batch_size,
                                              shuffle=False)
    print("Loaded data successfully!")
    return train_loader, test_loader

The call that fetches the dataset:

datasets.MNIST(root,train=True,transform=None,target_transform=None,download=False)

Where:

  • root: the root directory of the dataset; if the data already exists there it is loaded directly, otherwise set download to fetch it
  • train: if True, build the dataset from training.pt, otherwise from testing.pt
  • download: if True, download the dataset from the internet and place it under the root directory. If the dataset already exists it will not be downloaded again.
  • transform: a function/transform that takes a PIL image and returns a transformed version, e.g. transforms.ToTensor() (it converts the image data to a Tensor before it is passed to the DataLoader)
  • target_transform: a function/transform that takes the target and transforms it; similar to the above.
  • Batched loading:

    torch.utils.data.DataLoader(dataset=train_data, batch_size=Config.batch_size, shuffle=True)

    DataLoader handles loading the data, setting the batch size, and whether to shuffle. As a rule, the training set is shuffled and the test set is not.

    If you are working with your own dataset, pass it as dataset=your_dataset; the MNIST-specific loading steps above are then not needed.
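To pass your own data to DataLoader, any object implementing the map-style dataset protocol works: it just needs `__len__` and `__getitem__`. Here is a minimal pure-Python sketch of that protocol (the class name and toy data are made up for illustration):

```python
# A map-style dataset only needs __len__ and __getitem__;
# torch.utils.data.DataLoader calls exactly these two methods.
class MyDataset:
    def __init__(self, samples, labels):
        self.samples = samples
        self.labels = labels

    def __len__(self):
        # Number of samples in the dataset
        return len(self.samples)

    def __getitem__(self, idx):
        # Return one (sample, label) pair
        return self.samples[idx], self.labels[idx]

ds = MyDataset([[0.1], [0.2], [0.3]], [0, 1, 0])
print(len(ds))  # 3
print(ds[1])    # ([0.2], 1)
```

In practice you would return Tensors from `__getitem__` (or subclass torch.utils.data.Dataset), but the interface is exactly this.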

    2、構建模型

This post builds a BP (fully-connected) network; later posts will build common CNNs such as LeNet, AlexNet, and ResNet.

The network here has an input layer of size 28*28, a hidden layer of 128 units, and an output layer of size 10. The focus is on introducing the tooling; feel free to adapt the architecture to your needs and improve the model's accuracy.

import torch
import torch.nn as nn

class BP(nn.Module):
    def __init__(self):
        super(BP, self).__init__()
        # Hidden layer
        self.fc1 = nn.Sequential(nn.Linear(28*28, 128),
                                 nn.ReLU())
        # Output layer
        self.fc2 = nn.Sequential(nn.Linear(128, 10),
                                 nn.Sigmoid())

    def forward(self, x):
        # Flatten each image into a 1-D vector
        x = x.view(-1, 784)
        x = self.fc1(x)
        x = self.fc2(x)
        return x

data = torch.randn(1, 28, 28)
net = BP()
outputs = net(data)
print(outputs.shape)

The model's output has shape (1, 10); 10 is the number of label classes.
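As a sanity check, the parameter count of this architecture can be worked out by hand, counting weights plus biases per Linear layer:

```python
# fc1: Linear(784 -> 128): 784*128 weights + 128 biases
fc1_params = 28 * 28 * 128 + 128
# fc2: Linear(128 -> 10): 128*10 weights + 10 biases
fc2_params = 128 * 10 + 10
total = fc1_params + fc2_params
print(total)  # 101770
```

In PyTorch the same number can be obtained with sum(p.numel() for p in net.parameters()).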

3. Training the Model

Choosing a suitable optimizer and loss function helps the model converge quickly. Adam and SGD usually work well as optimizers; for the loss, MSE is the common choice for regression problems and cross-entropy for classification.

# Use cross-entropy loss
criterion = nn.CrossEntropyLoss()
# Use the Adam optimizer; it needs the model's parameters and a learning rate
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

    對于一些參數的選擇可以自行搜索

def train_step(self):
    print("Training & Evaluating based on BP......")
    file = './result/raw_train_mnist.txt'
    fp = open(file, 'w', encoding='utf-8')
    fp.write('epoch\tbatch\tloss\taccuracy\n')
    # Loop over epochs
    for epoch in range(Config.epoch):
        # Show the current epoch
        print("Epoch {:3}.".format(epoch + 1))
        # Loop over batches; the batch size is currently set to 128
        for batch_idx, (data, label) in enumerate(self.train):
            # The .cuda() calls are for GPU training; drop them when training on CPU
            data, label = Variable(data.cuda()), Variable(label.cuda())
            # Reset the gradients
            self.optimizer.zero_grad()
            outputs = self.net(data)
            # Compute the loss
            loss = self.criterion(outputs, label)
            loss.backward()
            # Apply the optimizer update
            self.optimizer.step()
            # Print results every 100 batches
            if batch_idx % Config.print_per_step == 0:
                # Compute the current batch accuracy
                _, predicted = torch.max(outputs, 1)
                correct = int((predicted == label).sum())
                accuracy = correct / Config.batch_size
                msg = "Batch: {:5}, Loss: {:6.2f}, Accuracy: {:8.2%}."
                # Print the accuracy
                print(msg.format(batch_idx, loss, accuracy))
                fp.write('{}\t{}\t{}\t{}\n'.format(epoch, batch_idx, loss, accuracy))
    fp.close()

    test_loss = 0.
    test_correct = 0
    for data, label in self.test:
        data, label = Variable(data.cuda()), Variable(label.cuda())
        outputs = self.net(data)
        loss = self.criterion(outputs, label)
        test_loss += loss * Config.batch_size
        _, predicted = torch.max(outputs, 1)
        correct = int((predicted == label).sum())
        test_correct += correct
    accuracy = test_correct / len(self.test.dataset)
    loss = test_loss / len(self.test.dataset)
    print("Test Loss: {:5.2f}, Accuracy: {:6.2%}".format(loss, accuracy))
    torch.save(self.net.state_dict(), './result/raw_train_mnist_model.pth')
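The accuracy bookkeeping above hinges on torch.max(outputs, 1), which returns the index of the largest score in each row, i.e. the predicted class. A pure-Python sketch of that argmax step and of the batch accuracy (the toy output row, predictions, and labels are made up):

```python
# One 10-dim output row; the predicted class is the index of the max score
outputs = [0.02, 0.01, 0.91, 0.01, 0.01, 0.01, 0.01, 0.01, 0.0, 0.01]
predicted = max(range(len(outputs)), key=lambda i: outputs[i])
print(predicted)  # 2

# Batch accuracy: fraction of predictions that match the labels
predictions = [2, 7, 1, 1]
labels = [2, 7, 0, 1]
correct = sum(p == l for p, l in zip(predictions, labels))
accuracy = correct / len(labels)
print(accuracy)  # 0.75
```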

4. Full Code and Model Training

import torch
from torchvision import datasets, transforms
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import numpy as np

device = torch.device('cuda:0')

class Config:
    batch_size = 128
    epoch = 10
    alpha = 1e-3
    print_per_step = 100  # Controls how often results are printed

class BP(nn.Module):
    def __init__(self):
        super(BP, self).__init__()
        self.fc1 = nn.Sequential(nn.Linear(28*28, 128),
                                 nn.ReLU())
        self.fc2 = nn.Sequential(nn.Linear(128, 10),
                                 nn.Sigmoid())

    def forward(self, x):
        x = x.view(-1, 784)
        x = self.fc1(x)
        x = self.fc2(x)
        return x

class TrainProcess:
    def __init__(self):
        self.train, self.test = self.load_data()
        self.net = BP().to(device)
        self.criterion = nn.CrossEntropyLoss()  # Define the loss function
        self.optimizer = optim.Adam(self.net.parameters(), lr=Config.alpha)

    @staticmethod
    def load_data():
        train_data = datasets.MNIST(root='./data/',
                                    train=True,
                                    transform=transforms.ToTensor(),
                                    download=True)
        test_data = datasets.MNIST(root='./data/',
                                   train=False,
                                   transform=transforms.ToTensor())
        # Return data iterators
        # shuffle: whether to shuffle the order
        train_loader = torch.utils.data.DataLoader(dataset=train_data,
                                                   batch_size=Config.batch_size,
                                                   shuffle=True)
        test_loader = torch.utils.data.DataLoader(dataset=test_data,
                                                  batch_size=Config.batch_size,
                                                  shuffle=False)
        return train_loader, test_loader

    def train_step(self):
        print("Training & Evaluating based on BP......")
        file = './result/raw_train_mnist.txt'
        fp = open(file, 'w', encoding='utf-8')
        fp.write('epoch\tbatch\tloss\taccuracy\n')
        for epoch in range(Config.epoch):
            print("Epoch {:3}.".format(epoch + 1))
            for batch_idx, (data, label) in enumerate(self.train):
                data, label = Variable(data.cuda()), Variable(label.cuda())
                self.optimizer.zero_grad()
                outputs = self.net(data)
                loss = self.criterion(outputs, label)
                loss.backward()
                self.optimizer.step()
                # Print results every 100 batches
                if batch_idx % Config.print_per_step == 0:
                    _, predicted = torch.max(outputs, 1)
                    correct = int((predicted == label).sum())
                    accuracy = correct / Config.batch_size
                    msg = "Batch: {:5}, Loss: {:6.2f}, Accuracy: {:8.2%}."
                    print(msg.format(batch_idx, loss, accuracy))
                    fp.write('{}\t{}\t{}\t{}\n'.format(epoch, batch_idx, loss, accuracy))
        fp.close()

        test_loss = 0.
        test_correct = 0
        for data, label in self.test:
            data, label = Variable(data.cuda()), Variable(label.cuda())
            outputs = self.net(data)
            loss = self.criterion(outputs, label)
            test_loss += loss * Config.batch_size
            _, predicted = torch.max(outputs, 1)
            correct = int((predicted == label).sum())
            test_correct += correct
        accuracy = test_correct / len(self.test.dataset)
        loss = test_loss / len(self.test.dataset)
        print("Test Loss: {:5.2f}, Accuracy: {:6.2%}".format(loss, accuracy))
        torch.save(self.net.state_dict(), './result/raw_train_mnist_model.pth')

if __name__ == "__main__":
    p = TrainProcess()
    p.train_step()

Training results:

Training & Evaluating based on BP......
Epoch   1.
Batch:     0, Loss:   2.31, Accuracy:   10.16%.
Batch:   100, Loss:   1.68, Accuracy:   83.59%.
Batch:   200, Loss:   1.60, Accuracy:   89.84%.
Batch:   300, Loss:   1.60, Accuracy:   85.94%.
Batch:   400, Loss:   1.55, Accuracy:   91.41%.
Epoch   2.
Batch:     0, Loss:   1.54, Accuracy:   91.41%.
Batch:   100, Loss:   1.56, Accuracy:   89.84%.
Batch:   200, Loss:   1.53, Accuracy:   91.41%.
Batch:   300, Loss:   1.56, Accuracy:   91.41%.
Batch:   400, Loss:   1.51, Accuracy:   96.09%.
Epoch   3.
Batch:     0, Loss:   1.50, Accuracy:   97.66%.
Batch:   100, Loss:   1.54, Accuracy:   92.19%.
Batch:   200, Loss:   1.52, Accuracy:   93.75%.
Batch:   300, Loss:   1.51, Accuracy:   95.31%.
Batch:   400, Loss:   1.53, Accuracy:   93.75%.
Epoch   4.
Batch:     0, Loss:   1.51, Accuracy:   94.53%.
Batch:   100, Loss:   1.50, Accuracy:   94.53%.
Batch:   200, Loss:   1.52, Accuracy:   95.31%.
Batch:   300, Loss:   1.53, Accuracy:   93.75%.
Batch:   400, Loss:   1.50, Accuracy:   96.88%.
Epoch   5.
Batch:     0, Loss:   1.49, Accuracy:   96.88%.
Batch:   100, Loss:   1.50, Accuracy:   96.09%.
Batch:   200, Loss:   1.50, Accuracy:   97.66%.
Batch:   300, Loss:   1.50, Accuracy:   93.75%.
Batch:   400, Loss:   1.50, Accuracy:   95.31%.
Epoch   6.
Batch:     0, Loss:   1.51, Accuracy:   96.88%.
Batch:   100, Loss:   1.51, Accuracy:   94.53%.
Batch:   200, Loss:   1.54, Accuracy:   92.97%.
Batch:   300, Loss:   1.48, Accuracy:   97.66%.
Batch:   400, Loss:   1.51, Accuracy:   96.09%.
Epoch   7.
Batch:     0, Loss:   1.50, Accuracy:   96.88%.
Batch:   100, Loss:   1.51, Accuracy:   95.31%.
Batch:   200, Loss:   1.49, Accuracy:   96.88%.
Batch:   300, Loss:   1.50, Accuracy:   94.53%.
Batch:   400, Loss:   1.49, Accuracy:   94.53%.
Epoch   8.
Batch:     0, Loss:   1.49, Accuracy:   96.88%.
Batch:   100, Loss:   1.49, Accuracy:   97.66%.
Batch:   200, Loss:   1.49, Accuracy:   96.88%.
Batch:   300, Loss:   1.51, Accuracy:   96.09%.
Batch:   400, Loss:   1.51, Accuracy:   96.88%.
Epoch   9.
Batch:     0, Loss:   1.49, Accuracy:   96.88%.
Batch:   100, Loss:   1.50, Accuracy:   95.31%.
Batch:   200, Loss:   1.49, Accuracy:   96.09%.
Batch:   300, Loss:   1.49, Accuracy:   96.09%.
Batch:   400, Loss:   1.49, Accuracy:   98.44%.
Epoch  10.
Batch:     0, Loss:   1.48, Accuracy:   97.66%.
Batch:   100, Loss:   1.48, Accuracy:   98.44%.
Batch:   200, Loss:   1.50, Accuracy:   95.31%.
Batch:   300, Loss:   1.49, Accuracy:   97.66%.
Batch:   400, Loss:   1.49, Accuracy:   97.66%.
Test Loss:  1.51, Accuracy: 97.02%

After just 10 epochs this very simple network reaches about 97% test accuracy. I hope this helps you get up and running with PyTorch quickly.
