Specifying multiple GPUs for training in PyTorch (PyTorch multi-GPU training)

PyTorch Multi-GPU Training

With the holidays coming up, a lot of the GPUs on our server were sitting idle, so I took the opportunity to work out how to train on multiple cards at the same time.

How it works

The basic flow of multi-GPU (data-parallel) training, sketched in code right after this list, is:

1. Load the model onto a main device.
2. Copy the model (as read-only replicas) onto the other devices.
3. Split the large batch evenly across the devices.
4. Merge the gradients computed on all devices and update the model parameters on the main device.
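This cycle can be sketched with the functional helpers that torch.nn.parallel exposes (scatter / replicate / parallel_apply / gather). This is only an illustration of what torch.nn.DataParallel does inside its forward(); the helper name data_parallel_forward below is my own, and you would not normally write this yourself:

import torch
from torch.nn.parallel import replicate, scatter, parallel_apply, gather

def data_parallel_forward(module, batch, device_ids):
    # Illustrative sketch of one forward pass the way nn.DataParallel performs it.
    # Assumes `module` already lives on the main device, device_ids[0].
    inputs = scatter(batch, device_ids)                      # split the batch across the cards
    replicas = replicate(module, device_ids[:len(inputs)])   # copy the module onto each card
    outputs = parallel_apply(replicas, inputs)               # run every replica on its own shard
    return gather(outputs, device_ids[0])                    # collect the outputs on the main card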

Code example (using MNIST)

#!/usr/bin/python3
# coding: utf-8
import torch
import torchvision
from torchvision import datasets, transforms
from tqdm import tqdm

device_ids = [3, 4, 6, 7]   # physical GPU indices to use; the first one acts as the main device
BATCH_SIZE = 64             # per-GPU batch size
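# An alternative way to pin the run to specific cards (not used below, shown only
# for reference) is the CUDA_VISIBLE_DEVICES environment variable. It must be set
# before CUDA is first initialised, and the visible cards are then re-indexed from 0:
#
#   import os
#   os.environ["CUDA_VISIBLE_DEVICES"] = "3,4,6,7"
#   device_ids = [0, 1, 2, 3]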

transform = transforms.Compose([
    transforms.ToTensor(),
    # MNIST images have a single channel, so Normalize takes one mean/std value
    transforms.Normalize(mean=[0.5], std=[0.5]),
])

data_train = datasets.MNIST(root="./data/",
                            transform=transform,
                            train=True,
                            download=True)

data_test = datasets.MNIST(root="./data/",
                           transform=transform,
                           train=False)

data_loader_train = torch.utils.data.DataLoader(dataset=data_train,
                                                # note: the batch size is scaled by the number of
                                                # GPUs, i.e. 64 x 4 = 256 samples per step, 64 per card
                                                batch_size=BATCH_SIZE * len(device_ids),
                                                shuffle=True,
                                                num_workers=2)

data_loader_test = torch.utils.data.DataLoader(dataset=data_test,
                                               batch_size=BATCH_SIZE * len(device_ids),
                                               shuffle=True,
                                               num_workers=2)

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = torch.nn.Sequential(
            torch.nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(stride=2, kernel_size=2),
        )
        self.dense = torch.nn.Sequential(
            torch.nn.Linear(14 * 14 * 128, 1024),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=0.5),
            torch.nn.Linear(1024, 10),
        )

    def forward(self, x):
        x = self.conv1(x)
        x = x.view(-1, 14 * 14 * 128)
        x = self.dense(x)
        return x
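# Quick shape check for the flatten size above (illustrative, not part of the original
# script): a 1x28x28 MNIST image keeps its spatial size through the padded 3x3
# convolutions and is halved once by the 2x2 max-pool, giving 128 maps of 14x14.
#
#   with torch.no_grad():
#       print(Model().conv1(torch.zeros(1, 1, 28, 28)).shape)  # torch.Size([1, 128, 14, 14])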

model = Model()
model = torch.nn.DataParallel(model, device_ids=device_ids)  # declare all devices to use
model = model.cuda(device=device_ids[0])                     # the model lives on the main device

cost = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
n_epochs = 50
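# quick sanity check (illustrative, not in the original script): the wrapped
# model's parameters should now report the main card as their device
#   print(next(model.parameters()).device)   # expected: cuda:3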

for epoch in range(n_epochs):
    running_loss = 0.0
    running_correct = 0
    print("Epoch {}/{}".format(epoch, n_epochs))
    print("-" * 10)

    model.train()
    for data in tqdm(data_loader_train):
        X_train, y_train = data
        # note: the input data is also placed on the main device
        X_train, y_train = X_train.cuda(device=device_ids[0]), y_train.cuda(device=device_ids[0])
        outputs = model(X_train)
        _, pred = torch.max(outputs.data, 1)
        optimizer.zero_grad()
        loss = cost(outputs, y_train)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        running_correct += torch.sum(pred == y_train.data).item()

    # evaluation: disable dropout and gradient tracking
    model.eval()
    testing_correct = 0
    with torch.no_grad():
        for data in data_loader_test:
            X_test, y_test = data
            X_test, y_test = X_test.cuda(device=device_ids[0]), y_test.cuda(device=device_ids[0])
            outputs = model(X_test)
            _, pred = torch.max(outputs.data, 1)
            testing_correct += torch.sum(pred == y_test.data).item()

    print("Loss is:{:.4f}, Train Accuracy is:{:.4f}%, Test Accuracy is:{:.4f}%".format(
        running_loss / len(data_train),
        100 * running_correct / len(data_train),
        100 * testing_correct / len(data_test)))

torch.save(model.state_dict(), "model_parameter.pkl")
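One caveat when saving: because the model is wrapped in DataParallel, every key in model.state_dict() carries a "module." prefix, so the checkpoint saved above only loads cleanly into another DataParallel-wrapped model. A common alternative (a minimal sketch, not from the original post) is to save the underlying module instead:

# save the wrapped model's underlying module so the keys have no "module." prefix
torch.save(model.module.state_dict(), "model_parameter.pkl")

# the checkpoint can then be loaded into a plain, single-GPU/CPU Model
restored = Model()
restored.load_state_dict(torch.load("model_parameter.pkl", map_location="cpu"))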

Results

nvidia-smi clearly shows cards 3, 4, 6 and 7 busy with the /usr/bin/python3 process (all sharing the same PID, 34930).

As for the actual speed-up: MNIST is a small dataset, so the scheduling overhead may well outweigh the computation and the gain is not obvious. On a large dataset, however, the advantage of multiple cards really shows.
