
PyTorch: Training on Multiple Specified GPUs (Multi-GPU Training in PyTorch)

發(fā)布時(shí)間:2024/10/14 编程问答 52 豆豆
生活随笔 收集整理的這篇文章主要介紹了 pytorch指定用多张显卡训练_Pytorch多GPU训练 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

PyTorch Multi-GPU Training

With the holidays approaching, a lot of GPUs on the server are sitting idle, so I took the opportunity to look into how to train on multiple cards at once.

How it works

多卡訓(xùn)練的基本過(guò)程

首先把模型加載到一個(gè)主設(shè)備

把模型只讀復(fù)制到多個(gè)設(shè)備

把大的batch數(shù)據(jù)也等分到不同的設(shè)備

最后將所有設(shè)備計(jì)算得到的梯度合并更新主設(shè)備上的模型參數(shù)

代碼實(shí)現(xiàn)(以Minist為例)

    #!/usr/bin/python3
    # coding: utf-8
    import torch
    from torchvision import datasets, transforms
    from tqdm import tqdm

    device_ids = [3, 4, 6, 7]
    BATCH_SIZE = 64

    # MNIST images are single-channel, so Normalize takes one mean/std value each
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize(mean=[0.5], std=[0.5])])

    data_train = datasets.MNIST(root="./data/",
                                transform=transform,
                                train=True,
                                download=True)
    data_test = datasets.MNIST(root="./data/",
                               transform=transform,
                               train=False)

    data_loader_train = torch.utils.data.DataLoader(dataset=data_train,
                                                    # note: scale the batch size by the number of devices
                                                    batch_size=BATCH_SIZE * len(device_ids),
                                                    shuffle=True,
                                                    num_workers=2)
    data_loader_test = torch.utils.data.DataLoader(dataset=data_test,
                                                   batch_size=BATCH_SIZE * len(device_ids),
                                                   shuffle=True,
                                                   num_workers=2)

    class Model(torch.nn.Module):
        def __init__(self):
            super(Model, self).__init__()
            self.conv1 = torch.nn.Sequential(
                torch.nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
                torch.nn.ReLU(),
                torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
                torch.nn.ReLU(),
                torch.nn.MaxPool2d(stride=2, kernel_size=2),
            )
            self.dense = torch.nn.Sequential(
                torch.nn.Linear(14 * 14 * 128, 1024),
                torch.nn.ReLU(),
                torch.nn.Dropout(p=0.5),
                torch.nn.Linear(1024, 10)
            )

        def forward(self, x):
            x = self.conv1(x)
            x = x.view(-1, 14 * 14 * 128)
            x = self.dense(x)
            return x

    model = Model()
    model = torch.nn.DataParallel(model, device_ids=device_ids)  # declare all usable devices
    model = model.cuda(device=device_ids[0])                     # put the model on the primary device
    cost = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters())

    n_epochs = 50
    for epoch in range(n_epochs):
        running_loss = 0.0
        running_correct = 0
        print("Epoch {}/{}".format(epoch, n_epochs))
        print("-" * 10)
        for data in tqdm(data_loader_train):
            X_train, y_train = data
            # note: the input data also goes to the primary device
            X_train, y_train = X_train.cuda(device=device_ids[0]), y_train.cuda(device=device_ids[0])
            outputs = model(X_train)
            _, pred = torch.max(outputs.data, 1)
            optimizer.zero_grad()
            loss = cost(outputs, y_train)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            running_correct += torch.sum(pred == y_train.data).item()

        testing_correct = 0
        with torch.no_grad():  # no gradients needed during evaluation
            for data in data_loader_test:
                X_test, y_test = data
                X_test, y_test = X_test.cuda(device=device_ids[0]), y_test.cuda(device=device_ids[0])
                outputs = model(X_test)
                _, pred = torch.max(outputs.data, 1)
                testing_correct += torch.sum(pred == y_test.data).item()

        print("Loss is:{:.4f}, Train Accuracy is:{:.4f}%, Test Accuracy is:{:.4f}%".format(
            running_loss / len(data_train),
            100 * running_correct / len(data_train),
            100 * testing_correct / len(data_test)))

    torch.save(model.state_dict(), "model_parameter.pkl")
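One caveat about the final `torch.save` line: because `DataParallel` wraps the original network in a `.module` attribute, every key in `model.state_dict()` carries a `module.` prefix, so the file cannot be loaded directly into a plain `Model`. A minimal sketch of removing the prefix (plain dicts, no GPU required; `strip_module_prefix` is a hypothetical helper name):

```python
def strip_module_prefix(state_dict, prefix="module."):
    """Remove the DataParallel 'module.' prefix from state-dict keys."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

saved = {"module.conv1.0.weight": "w1", "module.dense.0.bias": "b1"}
print(strip_module_prefix(saved))
# {'conv1.0.weight': 'w1', 'dense.0.bias': 'b1'}
```

Equivalently, saving `model.module.state_dict()` in the first place avoids the prefix altogether.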

結(jié)果分析

可以通過(guò)nvidia-smi清楚地看到3, 4, 6, 7卡在計(jì)算/usr/bin/python3進(jìn)程(進(jìn)程號(hào)都為34930)

從實(shí)際加速效果來(lái)看, 由于minist是小數(shù)據(jù)集, 可能調(diào)度帶來(lái)的overhead反而比計(jì)算的開(kāi)銷大, 因此加速不明顯. 但是到大數(shù)據(jù)集上訓(xùn)練時(shí), 多卡的優(yōu)勢(shì)就會(huì)體現(xiàn)出來(lái)了
