nn.Dropout

Published: 2024/3/12

Dropout

torch.nn.Dropout(p=0.5, inplace=False)

  • p – probability of an element to be zeroed. Default: 0.5
  • inplace – If set to True, will do this operation in-place. Default: False

During training, elements of the input tensor are randomly zeroed with probability p, where p is the probability of an element being set to 0. For example, p = 1 would zero the entire input.

During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.
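The Bernoulli masking described above can be sketched by hand. The helper name `manual_dropout` below is not part of PyTorch; it is a minimal illustration of the mechanism, assuming the same mask-and-rescale scheme the docs describe:

```python
import torch

def manual_dropout(x, p=0.5):
    # Sample a Bernoulli mask: each element survives with probability 1 - p.
    mask = torch.bernoulli(torch.full_like(x, 1 - p))
    # Zero out dropped elements and rescale survivors by 1 / (1 - p),
    # so the expected value of each element is unchanged.
    return x * mask / (1 - p)

torch.manual_seed(0)
x = torch.ones(8)
print(manual_dropout(x, p=0.5))  # each element is either 0.0 or 2.0
```

Because each element is masked independently, every forward call produces a different zero pattern.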

Note: the PyTorch documentation points out that the outputs are scaled by a factor of 1/(1-p).

Furthermore, the outputs are scaled by a factor of 1/(1-p) during training. This means that during evaluation the module simply computes an identity function.
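A quick numerical check of both claims, assuming p = 0.5: in train mode the 1/(1-p) rescaling keeps the mean of a large tensor roughly unchanged, and in eval mode the layer is exactly the identity:

```python
import torch
import torch.nn as nn

m = nn.Dropout(p=0.5)
x = torch.ones(100000)

m.train()  # modules start in train mode; shown explicitly here
y = m(x)
# Survivors are scaled by 1/(1-p) = 2, so the mean stays near 1.0.
print(y.mean().item())

m.eval()
print(torch.equal(m(x), x))  # True: identity in eval mode
```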

The following example shows that after dropout, the surviving elements are scaled by 1/(1-p) = 2 (with p = 0.5); which elements are zeroed is random, so the output below is one possible outcome:

```python
import torch
import torch.nn as nn

input = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float64)
input = torch.unsqueeze(input, 0)
m = nn.Dropout(p=0.5)
output = m(input)

print("input: ", input)
print("output: ", output)
print("input: ", input)
'''
input:  tensor([[[1., 2., 3.],
         [4., 5., 6.],
         [7., 8., 9.]]], dtype=torch.float64)
output: tensor([[[ 2.,  4.,  0.],
         [ 0., 10., 12.],
         [ 0., 16.,  0.]]], dtype=torch.float64)
input:  tensor([[[1., 2., 3.],
         [4., 5., 6.],
         [7., 8., 9.]]], dtype=torch.float64)
'''
```

When nn.Dropout is constructed with inplace=True, the computed result overwrites the original input tensor, as shown below:

```python
import torch
import torch.nn as nn

input = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float64)
input = torch.unsqueeze(input, 0)
m = nn.Dropout(p=0.5, inplace=True)
output = m(input)

print("input: ", input)
print("output: ", output)
print("input: ", input)
'''
input:  tensor([[[1., 2., 3.],
         [4., 5., 6.],
         [7., 8., 9.]]], dtype=torch.float64)
output: tensor([[[ 2.,  4.,  0.],
         [ 0., 10., 12.],
         [ 0., 16.,  0.]]], dtype=torch.float64)
input:  tensor([[[ 2.,  4.,  0.],
         [ 0., 10., 12.],
         [ 0., 16.,  0.]]], dtype=torch.float64)
'''
```

Different behavior in training and evaluation

nn.Dropout behaves differently during training and evaluation. In training mode it randomly drops neurons with probability p, but in evaluation mode no neurons are dropped, as shown below:

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, p=0.0):
        super().__init__()
        self.drop_layer = nn.Dropout(p=p)

    def forward(self, inputs):
        return self.drop_layer(inputs)

model = Model(p=0.5)
# creating inputs
inputs = torch.rand(10)
# forwarding inputs in train mode (the default for a new module)
print('Normal (train) model:')
print('Model ', model(inputs))

# switching to eval mode
model.eval()
# forwarding inputs in evaluation mode
print('Evaluation mode:')
print('Model ', model(inputs))
# show model summary
print('Print summary:')
print(model)
'''
Normal (train) model:
Model  tensor([0.0000, 1.3517, 0.0000, 0.2766, 0.3060, 1.6334, 0.0000, 0.9740, 0.9118,
        0.0000])
Evaluation mode:
Model  tensor([0.9284, 0.6758, 0.3947, 0.1383, 0.1530, 0.8167, 0.2038, 0.4870, 0.4559,
        0.2730])
Print summary:
Model(
  (drop_layer): Dropout(p=0.5, inplace=False)
)
'''
```
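The same behavior is also available through the functional interface `torch.nn.functional.dropout`. Unlike the module, it does not follow the model's train/eval state on its own; the `training` flag must be passed explicitly. A minimal sketch:

```python
import torch
import torch.nn.functional as F

x = torch.ones(6)

# training=True: elements dropped with probability p, survivors scaled by 1/(1-p)
y_train = F.dropout(x, p=0.5, training=True)

# training=False: identity, matching an nn.Dropout module in eval mode
y_eval = F.dropout(x, p=0.5, training=False)
print(y_train)
print(y_eval)  # tensor([1., 1., 1., 1., 1., 1.])
```

Forgetting to thread the `training` flag through is a common bug with the functional form, which is one reason the module version plus `model.eval()` is often preferred.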
