Converting between Variable, Tensor, and numpy in PyTorch
Source: https://blog.csdn.net/m0_37592397/article/details/88327248
1. Convert a numpy array to a Tensor

```python
sub_ts = torch.from_numpy(sub_img)  # sub_img is a numpy array
```

2. Convert a Tensor to a numpy array

```python
sub_np1 = sub_ts.numpy()  # sub_ts is a tensor
```

3. Convert a numpy array to a Variable

```python
sub_va = Variable(torch.from_numpy(sub_img))
```

4. Convert a Variable to a numpy array

```python
sub_np2 = sub_va.data.numpy()
```
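The four conversions above can be combined into one runnable round trip. A minimal sketch (the array names here are illustrative, not from the original snippets); note that `torch.from_numpy` and `.numpy()` share the underlying buffer:

```python
import numpy as np
import torch
from torch.autograd import Variable

# numpy -> tensor: shares memory with the source array
arr = np.ones((2, 3), dtype=np.float32)
t = torch.from_numpy(arr)

# tensor -> numpy: also shares memory
back = t.numpy()

# numpy -> Variable, and Variable -> numpy via .data
v = Variable(torch.from_numpy(arr))
v_np = v.data.numpy()

# mutating the numpy array is visible through every shared view
arr += 1
print(t)     # all elements are now 2.0
print(back)  # all elements are now 2.0
```

Because all of these objects view the same memory, the conversions are essentially free, but an in-place update to any one of them changes them all.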
Example 1
```python
# coding=utf-8
import numpy as np
import torch

"""Getting to know tensors in PyTorch"""


def just_try():
    # A Tensor can be thought of as a high-dimensional array, similar to a
    # numpy array, but a tensor can be accelerated on a GPU
    x = torch.FloatTensor(5, 3)  # [torch.FloatTensor of size 5x3], storage is allocated but not initialized
    print('x: {}'.format(x))
    # torch.Size is a subclass of tuple and supports all tuple operations
    print('x.size(): {}'.format(x.size()))  # torch.Size([5, 3])
    y_tensor_gpu = x.cuda()  # move the tensor to the GPU
    print('y_tensor_gpu: {}'.format(y_tensor_gpu))


def multiple_add():
    x = torch.FloatTensor(3, 2)
    y = torch.FloatTensor(3, 2)
    # first form of addition
    result_1 = x + y
    # second form of addition
    result_2 = torch.add(x, y)
    # third form of addition: write into a preallocated output tensor
    result_3 = torch.FloatTensor(3, 2)
    torch.add(x, y, out=result_3)
    print('result_1: {}'.format(result_1))
    print('result_1.size():{}'.format(result_1.size()))
    print('result_2: {}'.format(result_2))
    print('result_2.size():{}'.format(result_2.size()))
    print('result_3: {}'.format(result_3))
    print('result_3.size():{}'.format(result_3.size()))


def inplace_operation():
    x = torch.FloatTensor(3, 2)
    y = torch.FloatTensor(3, 2)
    print('original y: {}'.format(y))
    # ordinary addition: does not change the original y
    result_common = y.__add__(x)
    print('common add, result_common: {}'.format(result_common))
    print('common add, y: {}'.format(y))
    # in-place addition: modifies y
    y.__iadd__(x)
    print('inplace add, y: {}'.format(y))


def tensor_vs_numpy():
    """The relationship between tensors and numpy arrays"""
    y = torch.FloatTensor(3, 2)
    print('y: {}'.format(y))
    # tensor slicing works like numpy slicing
    print('y slice: {}'.format(y[:, 1]))
    # Tip: converting between tensor and numpy is easy and fast; operations a
    # Tensor does not support can be done in numpy and converted back afterwards
    aa_tensor = torch.ones(3, 2)
    print('original aa_tensor: {}'.format(aa_tensor))
    # tensor ---> numpy
    # Note: the tensor and the numpy array share memory, which makes the
    # conversion fast, but also means that changing one changes the other
    bb_numpy = aa_tensor.numpy()
    print('bb_numpy: {}'.format(bb_numpy))
    # numpy ---> tensor
    cc_tensor = torch.from_numpy(bb_numpy)
    print('cc_tensor: {}'.format(cc_tensor))
    bb_numpy += 1
    print('after adding one, bb_numpy: {}'.format(bb_numpy))
    print('after adding one, aa_tensor: {}'.format(aa_tensor))
    print('after adding one, cc_tensor: {}'.format(cc_tensor))


if __name__ == '__main__':
    just_try()
    print("********************")
    multiple_add()
    print("********************")
    inplace_operation()
    print("********************")
    tensor_vs_numpy()
```
Output:
```
x: tensor([[ 8.4735e-01,  4.5852e-41,  1.4709e-28],
        [ 3.0645e-41,  9.5032e-04,  4.5852e-41],
        [ 2.5129e-39,  4.5852e-41, -4.3164e-02],
        [ 4.5850e-41,  2.6068e-39,  4.5852e-41],
        [ 3.0926e+00,  4.5852e-41,  2.5129e-39]])
x.size(): torch.Size([5, 3])
y_tensor_gpu: tensor([[ 8.4735e-01,  4.5852e-41,  1.4709e-28],
        [ 3.0645e-41,  9.5032e-04,  4.5852e-41],
        [ 2.5129e-39,  4.5852e-41, -4.3164e-02],
        [ 4.5850e-41,  2.6068e-39,  4.5852e-41],
        [ 3.0926e+00,  4.5852e-41,  2.5129e-39]], device='cuda:0')
********************
result_1: tensor([[8.4734e-01, 7.6497e-41],
        [2.3627e-29, 3.0672e-41],
        [2.2296e-29, 3.0645e-41]])
result_1.size():torch.Size([3, 2])
result_2: tensor([[8.4734e-01, 7.6497e-41],
        [2.3627e-29, 3.0672e-41],
        [2.2296e-29, 3.0645e-41]])
result_2.size():torch.Size([3, 2])
result_3: tensor([[8.4734e-01, 7.6497e-41],
        [2.3627e-29, 3.0672e-41],
        [2.2296e-29, 3.0645e-41]])
result_3.size():torch.Size([3, 2])
********************
original y: tensor([[1.4718e-28, 3.0645e-41],
        [2.3627e-29, 3.0672e-41],
        [2.2296e-29, 3.0645e-41]])
common add, result_common: tensor([[1.7051e-28, 6.1290e-41],
        [4.7253e-29, 6.1343e-41],
        [4.4592e-29, 6.1290e-41]])
common add, y: tensor([[1.4718e-28, 3.0645e-41],
        [2.3627e-29, 3.0672e-41],
        [2.2296e-29, 3.0645e-41]])
inplace add, y: tensor([[1.7051e-28, 6.1290e-41],
        [4.7253e-29, 6.1343e-41],
        [4.4592e-29, 6.1290e-41]])
********************
y: tensor([[1.4718e-28, 3.0645e-41],
        [4.7253e-29, 6.1343e-41],
        [4.4592e-29, 6.1290e-41]])
y slice: tensor([3.0645e-41, 6.1343e-41, 6.1290e-41])
original aa_tensor: tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
bb_numpy: [[1. 1.]
 [1. 1.]
 [1. 1.]]
cc_tensor: tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
after adding one, bb_numpy: [[2. 2.]
 [2. 2.]
 [2. 2.]]
after adding one, aa_tensor: tensor([[2., 2.],
        [2., 2.],
        [2., 2.]])
after adding one, cc_tensor: tensor([[2., 2.],
        [2., 2.],
        [2., 2.]])
```
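A side note on the values above: `torch.FloatTensor(5, 3)` allocates storage without initializing it, which is why the printout shows arbitrary leftover-memory values like `4.5852e-41`. In current PyTorch the uninitialized constructor is spelled `torch.empty`, and an explicit initializer is preferred when the contents matter; a minimal sketch:

```python
import torch

# uninitialized allocation: values are whatever happened to be in memory
a = torch.empty(5, 3)

# explicit initialization when the contents matter
b = torch.zeros(5, 3)
c = torch.ones(5, 3)
print(b.sum().item())  # 0.0
print(c.sum().item())  # 15.0
```

Never rely on the contents of an uninitialized tensor; use `zeros`, `ones`, `rand`, or `full` instead.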
Example 2
```python
# coding=utf-8
import numpy as np
import torch
from torch.autograd import Variable

"""Getting to know Variable in PyTorch"""

# Variable is the core of PyTorch's autograd automatic-differentiation module.
# It wraps a Tensor and supports almost all tensor operations.
# It has three main attributes:
# 1. data: the Tensor held by the Variable
# 2. grad: the gradient of data; grad is itself a Variable, not a Tensor,
#    with the same shape as data
# 3. grad_fn: points to a Function object used during backpropagation to
#    compute the gradients of the inputs


def about_variable():
    x = Variable(torch.ones(3, 2), requires_grad=True)
    y = x.detach().numpy()
    z = torch.from_numpy(y)
    print('x: {}'.format(x))
    print('***************')
    print('y: {}'.format(y))
    print('***************')
    print('z: {}'.format(z))
    print('***************')
    print('x.data: {}'.format(x.data))
    print('***************')
    print('x.grad: {}'.format(x.grad))
    # Variable and Tensor expose almost identical interfaces
    aa_variable = Variable(torch.ones(3, 2))
    print('torch.cos(aa_variable): {}'.format(torch.cos(aa_variable)))
    print('torch.cos(aa_variable.data): {}'.format(torch.cos(aa_variable.data)))


if __name__ == '__main__':
    about_variable()
```
Output:
```
x: tensor([[1., 1.],
        [1., 1.],
        [1., 1.]], requires_grad=True)
***************
y: [[1. 1.]
 [1. 1.]
 [1. 1.]]
***************
z: tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
***************
x.data: tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
***************
x.grad: None
torch.cos(aa_variable): tensor([[0.5403, 0.5403],
        [0.5403, 0.5403],
        [0.5403, 0.5403]])
torch.cos(aa_variable.data): tensor([[0.5403, 0.5403],
        [0.5403, 0.5403],
        [0.5403, 0.5403]])
```

Summary
That covers converting between Variable, Tensor, and numpy in PyTorch; hopefully it helps you solve the problem you ran into.
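One closing note, assuming a modern PyTorch (0.4 or later): `Variable` has been merged into `Tensor` and the wrapper is no longer needed; `requires_grad` is set directly on the tensor, and `.detach()` replaces `.data` for getting a numpy view of a tensor that tracks gradients. A minimal sketch:

```python
import torch

# no Variable wrapper needed: set requires_grad on the tensor itself
x = torch.ones(3, 2, requires_grad=True)

# a scalar result whose gradient we can backpropagate
y = (x * 2).sum()
y.backward()
print(x.grad)  # d(y)/d(x): a 3x2 tensor of 2.0

# to get a numpy array from a tensor that requires grad, detach first
x_np = x.detach().numpy()
print(x_np.shape)
```

`x.detach().numpy()` is preferred over `x.data.numpy()` because autograd can detect (and error on) invalid in-place modifications of the detached view.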