nn.Dropout
torch.nn.Dropout(p=0.5, inplace=False)
- p – probability of an element to be zeroed. Default: 0.5
- inplace – If set to True, will do this operation in-place. Default: False
During training, elements of the input are randomly set to 0 with probability p, where p is the zeroing probability; for example, p=1 means every element of the input is zeroed.
During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.
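To make the Bernoulli sampling concrete, here is a minimal sketch (assuming PyTorch is installed) that checks the empirical fraction of zeroed elements against p on a large input:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # for reproducibility

m = nn.Dropout(p=0.3)          # each element is zeroed with probability 0.3
x = torch.ones(1_000_000)      # large input so the empirical rate is stable

y = m(x)                       # modules are in train mode by default, so dropout is active
zero_fraction = (y == 0).float().mean().item()
print(f"zeroed: {zero_fraction:.3f}")   # close to p = 0.3
```

The surviving elements are not left at 1.0; they come out as 1/(1 - 0.3) ≈ 1.43, which is the scaling discussed next.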
Note: the PyTorch documentation also points out that the outputs are scaled by a factor of 1/(1 - p).
Furthermore, the outputs are scaled by a factor of 1/(1 - p) during training. This means that during evaluation the module simply computes an identity function.
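The point of the 1/(1 - p) scaling is that the expected value of each element stays unchanged, which is why evaluation can be a plain identity. A quick sketch (assuming PyTorch is installed) that averages many forward passes:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # for reproducibility
p = 0.5
m = nn.Dropout(p=p)

x = torch.full((10,), 4.0)

# Average the output over many forward passes: survivors are scaled by
# 1/(1 - p), so the empirical mean converges back to the original input.
mean = torch.stack([m(x) for _ in range(20_000)]).mean(dim=0)
print(mean)  # every entry is close to 4.0
```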
The example below shows that after dropout, the surviving elements are scaled to 1/(1 - p) = 2 times their original values:
```python
import torch
import torch.nn as nn

input = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float64)
input = torch.unsqueeze(input, 0)
m = nn.Dropout(p=0.5)
output = m(input)
print("input: ", input)
print("output: ", output)
print("input: ", input)
'''
input:  tensor([[[1., 2., 3.],
         [4., 5., 6.],
         [7., 8., 9.]]], dtype=torch.float64)
output:  tensor([[[ 2.,  4.,  0.],
         [ 0., 10., 12.],
         [ 0., 16.,  0.]]], dtype=torch.float64)
input:  tensor([[[1., 2., 3.],
         [4., 5., 6.],
         [7., 8., 9.]]], dtype=torch.float64)
'''
```

When we set inplace=True on nn.Dropout, the computed result replaces the original input, as shown below:
```python
import torch
import torch.nn as nn

input = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float64)
input = torch.unsqueeze(input, 0)
m = nn.Dropout(p=0.5, inplace=True)
output = m(input)
print("input: ", input)
print("output: ", output)
print("input: ", input)
'''
input:  tensor([[[1., 2., 3.],
         [4., 5., 6.],
         [7., 8., 9.]]], dtype=torch.float64)
output:  tensor([[[ 2.,  4.,  0.],
         [ 0., 10., 12.],
         [ 0., 16.,  0.]]], dtype=torch.float64)
input:  tensor([[[ 2.,  4.,  0.],
         [ 0., 10., 12.],
         [ 0., 16.,  0.]]], dtype=torch.float64)
'''
```

Differences between training and evaluation
nn.Dropout behaves differently during training and evaluation. In training mode it randomly drops neurons with probability p, but in evaluation mode no neurons are dropped at all, as shown below:
```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, p=0.0):
        super().__init__()
        self.drop_layer = nn.Dropout(p=p)

    def forward(self, inputs):
        return self.drop_layer(inputs)

model = Model(p=0.5)

# creating inputs
inputs = torch.rand(10)
# forwarding inputs in train mode
print('Normal (train) model:')
print('Model ', model(inputs))

# switching to eval mode
model.eval()
# forwarding inputs in evaluation mode
print('Evaluation mode:')
print('Model ', model(inputs))

# show model summary
print('Print summary:')
print(model)
'''
Normal (train) model:
Model  tensor([0.0000, 1.3517, 0.0000, 0.2766, 0.3060, 1.6334, 0.0000, 0.9740, 0.9118,
        0.0000])
Evaluation mode:
Model  tensor([0.9284, 0.6758, 0.3947, 0.1383, 0.1530, 0.8167, 0.2038, 0.4870, 0.4559,
        0.2730])
Print summary:
Model(
  (drop_layer): Dropout(p=0.5, inplace=False)
)
'''
```