
PyTorch Learning - Task 1

發(fā)布時(shí)間:2025/4/5 编程问答 32 豆豆
生活随笔 收集整理的這篇文章主要介紹了 Pytorch学习-Task1 小編覺得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

PyTorch Learning - Task 1: PyTorch Tensor Computation and Conversion with NumPy

  • Tensors
      • 1. Defining tensors
      • 2. Tensor operations
      • 3. Converting between Tensor and NumPy
      • 4. Automatic differentiation
  • PyTorch and neural networks
    • Hands-on: training an image classifier
      • Optimization methods in deep learning - momentum, Nesterov Momentum, AdaGrad, Adadelta, RMSprop, Adam

Tensors

1. Defining tensors

PyTorch supports integer and floating-point data types, each with a corresponding tensor subtype:

Data type    Type name           Tensor subtype
(1) Integer
8-bit        int8                CharTensor
8-bit        uint8               ByteTensor
16-bit       short               ShortTensor
32-bit       int                 IntTensor
64-bit       long                LongTensor
(2) Floating point
16-bit       half / float16      HalfTensor
32-bit       float / float32     FloatTensor
64-bit       double / float64    DoubleTensor
import torch

x = torch.Tensor(5, 3)   # create an uninitialized 5x3 tensor

# a dtype can be specified at creation time
t = torch.zeros(1, dtype=torch.float32)
t = torch.FloatTensor(1)

# other constructors
x = torch.rand(5, 3)     # 5x3 tensor with random values
y = torch.rand(5, 3)
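As a quick check of the table above, a tensor's subtype name can be read back with .type(); a minimal sketch:

```python
import torch

# the dtype chosen at creation maps to the tensor subtype from the table
t_int = torch.zeros(3, dtype=torch.int64)
print(t_int.type())     # torch.LongTensor

t_float = torch.zeros(3, dtype=torch.float32)
print(t_float.type())   # torch.FloatTensor
```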

2. Tensor operations

# add two tensors of the same shape
# method 1
result = x + y
# method 2: preallocate, then write into `out`
result = torch.empty(5, 3)
torch.add(x, y, out=result)
# in-place: add x to y
y.add_(x)

# PyTorch also provides mathematical functions, e.g. cosine
a20 = torch.tensor(0.5)
a21 = torch.cos(a20)

3. Converting between Tensor and NumPy

# Tensor -> NumPy
a = torch.ones(5)
b = a.numpy()

# they share memory: modifying the tensor also changes the array
a.add_(1)
print(a)
print(b)

# NumPy array -> Tensor
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)

# except for CharTensor, every tensor type can be moved freely between CPU and GPU
if torch.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
    x + y

4. Automatic differentiation

from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
print(x)
y = x + 2
print(y)
z = y * y * 3
print(z)
out = z.mean()
out.backward()   # computes d(out)/dx
'''
Derivation:
    o = 1/4 * sum(z_i),  z_i = 3 * (x_i + 2)^2
    z_i at x_i = 1 is 3 * 3^2 = 27
    do/dx_i = 3/2 * (x_i + 2), which at x_i = 1 is 9/2 = 4.5
'''
x.grad

x = torch.randn(3)
x = Variable(x, requires_grad=True)
y = x * 2
# y.data.norm() is the L2 norm of y: square each entry, sum, take the square root
print(torch.sqrt(torch.sum(torch.pow(y, 2))))
i = 0
while y.data.norm() < 1000:
    y = y * 2
    i += 1
print("i= ", i)
print("y= ", y)
gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
print("gradients= ", gradients)
y.backward(gradients)
x.grad
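The hand derivation above can be checked numerically; a minimal sketch (on recent PyTorch, requires_grad lives on the tensor itself, so no Variable wrapper is needed):

```python
import torch

# o = mean(3 * (x + 2)^2) over a 2x2 tensor of ones
xv = torch.ones(2, 2, requires_grad=True)
o = (3 * (xv + 2) ** 2).mean()
o.backward()
print(xv.grad)   # every entry is 3/2 * (1 + 2) = 4.5
```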

PyTorch and neural networks

A typical neural-network training loop:
- define a network with learnable parameters
- iterate over a dataset of inputs:
- run each input through the network
- compute the loss
- propagate gradients back to the network's parameters
- update the weights: weight = weight - learning_rate * gradient
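The update rule in the last step is plain arithmetic and can be sketched without PyTorch at all; here a toy loss L(w) = (w - 3)^2 is minimized by hand (the function and constants are illustrative only):

```python
# gradient-descent sketch: w <- w - learning_rate * gradient
lr = 0.1
w = 0.0
for _ in range(100):
    grad = 2 * (w - 3)   # dL/dw for L(w) = (w - 3)^2
    w = w - lr * grad
print(round(w, 4))        # converges toward the minimum at w = 3
```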

# imports
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

# define a network
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)        # 1 input image channel, 6 output channels, 5x5 convolution kernel
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # an affine operation: y = Wx + b
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))  # max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)       # if the size is a square you can specify a single number
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()

# inspect the model's parameters
net.parameters()
params = list(net.parameters())
print(len(params))
print(params[0].size())

input = Variable(torch.randn(1, 1, 32, 32))
print(input)
out = net(input)
out
'''
Note: torch.nn only accepts mini-batches, never a single sample.
nn.Conv2d, for example, takes a 4D tensor: nSamples x nChannels x Height x Width.
For a single sample, use input.unsqueeze(0) to add a fake batch dimension.
'''
net.zero_grad()   # zero the gradient buffers of all parameters
out.backward(torch.randn(1, 10))

'''
Loss functions
The nn package contains several loss functions;
nn.MSELoss computes the mean squared error between input and target.
'''
print("input= ", input)
output = net(input)
print("output= ", output)
target = Variable(torch.arange(1, 11, dtype=torch.float))  # torch.range is deprecated
print("target= ", target)
criterion = nn.MSELoss()
loss = criterion(output, target)
print("loss= ", loss)

'''
Updating the weights
The simplest rule is stochastic gradient descent (SGD):
    weight = weight - learning_rate * gradient

    learning_rate = 0.01
    for f in net.parameters():
        f.data.sub_(f.grad.data * learning_rate)

Variants such as SGD, Nesterov-SGD, Adam, and RMSProp live in the torch.optim package.
Usage:
'''
import torch.optim as optim
optimizer = optim.SGD(net.parameters(), lr=0.01)  # create your optimizer

optimizer.zero_grad()   # zero the gradient buffers first
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()

Hands-on: training an image classifier

'''
Goal: train an image classifier.
Steps:
1) load and preprocess the CIFAR10 dataset with torchvision
2) define a convolutional neural network
3) define a loss function
4) train the network on the training data
5) evaluate the network on the test data
'''

Step 1: load the data. If the download keeps failing, download the dataset manually and place it in the data folder; it will then be used directly without downloading again.

import torch
import torchvision
import torchvision.transforms as transforms

# torchvision datasets yield PILImage images with values in [0, 1];
# normalize them into tensors with values in [-1, 1]
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
trainset = torchvision.datasets.CIFAR10('./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10('./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

Step 2: show a few random images

# show a few random training images
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

def imshow(img):
    img = img / 2 + 0.5   # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

dataiter = iter(trainloader)
images, labels = next(dataiter)

imshow(torchvision.utils.make_grid(images))
print(" ".join('%5s' % classes[labels[j]] for j in range(4)))


Step 3: define a convolutional neural network.
You must override the forward function; backward does not need to be written, since it is generated automatically.
The forward function connects the layers together.
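That backward comes for free can be seen on a much smaller module; a sketch with a made-up one-method module (Square is illustrative, not part of the classifier):

```python
import torch
import torch.nn as nn

class Square(nn.Module):
    # only forward is defined; autograd derives the backward pass
    def forward(self, x):
        return (x ** 2).sum()

m = Square()
xs = torch.tensor([1.0, 2.0], requires_grad=True)
m(xs).backward()
print(xs.grad)   # tensor([2., 4.]), i.e. d(x^2)/dx = 2x
```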

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
net


Step 4: define a loss function and an optimizer

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)  # with a momentum term
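nn.CrossEntropyLoss takes raw logits and integer class indices (not one-hot vectors); a standalone sketch with made-up numbers:

```python
import math
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, 0.5, 0.1]])   # one sample, three class scores
target = torch.tensor([0])                  # index of the true class
loss = criterion(logits, target)

# equals the negative log of the softmax probability of the true class
p0 = math.exp(2.0) / (math.exp(2.0) + math.exp(0.5) + math.exp(0.1))
print(abs(loss.item() - (-math.log(p0))) < 1e-5)   # True
```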

Step 5: train the network

# train the network
import torch.nn.functional as F
from torch.autograd import Variable
'''
Train for two epochs.
running_loss accumulates the loss.
For each batch `data` in trainloader:
    1) unpack inputs, labels and wrap them as Variables
    2) zero the parameter gradients with optimizer.zero_grad()
    3) forward pass: outputs = net(inputs)
    4) compute the loss from the predictions `outputs` and the true labels
    5) loss.backward() backpropagates the loss
    6) optimizer.step() updates all parameters
'''
for epoch in range(2):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        inputs, labels = Variable(inputs), Variable(labels)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
print('Finished Training')

# per-class accuracy on the test set
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data in testloader:
    images, labels = data
    outputs = net(Variable(images))
    _, predicted = torch.max(outputs.data, 1)
    c = (predicted == labels).squeeze()
    for i in range(4):
        label = labels[i]
        class_correct[label] += c[i].item()
        class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))

Optimization methods in deep learning: momentum, Nesterov Momentum, AdaGrad, Adadelta, RMSprop, Adam
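As a taste of how these methods differ from plain SGD, the classical momentum update can be written out in a few lines of plain Python (learning rate, momentum coefficient, and toy loss are illustrative only):

```python
# classical momentum: v <- mu * v + grad ; w <- w - lr * v
def momentum_step(w, v, grad, lr=0.1, mu=0.9):
    v = mu * v + grad
    return w - lr * v, v

w, v = 0.0, 0.0
for _ in range(200):
    grad = 2 * (w - 3)        # same toy loss (w - 3)^2 as before
    w, v = momentum_step(w, v, grad)
print(round(w, 2))            # oscillates but settles near the minimum at 3
```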
