

BCELoss和BCEWithLogitsLoss

Published: 2024/8/26

Source: https://www.cnblogs.com/jiangkejie/p/11207863.html

BCELoss

CLASS torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean')

Creates a criterion that measures the binary cross entropy between the target and the output.

When unreduced (i.e. with reduction set to 'none'), the loss is described as:

    ℓ(x, y) = L = {l_1, …, l_N}^T,  l_n = −w_n [ y_n · log x_n + (1 − y_n) · log(1 − x_n) ]

where N is the batch size. If reduction is not 'none' (the default is 'mean'), then:

    ℓ(x, y) = mean(L)  if reduction = 'mean'
    ℓ(x, y) = sum(L)   if reduction = 'sum'

That is, the per-sample losses in the batch are averaged or summed.

It can be used to measure the reconstruction error, for example in an auto-encoder. Note that the targets y should be numbers between 0 and 1.
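A minimal usage sketch (not from the original post; shapes and values are illustrative): because BCELoss expects probabilities in [0, 1], the raw model output is passed through a sigmoid first, and the result should match the hand-computed mean of −[y·log p + (1−y)·log(1−p)].

```python
import torch
import torch.nn as nn

loss_fn = nn.BCELoss()

logits = torch.randn(4, 1)             # raw model outputs
probs = torch.sigmoid(logits)          # squashed into (0, 1), as BCELoss requires
target = torch.tensor([[1.], [0.], [1.], [0.]])

loss = loss_fn(probs, target)

# Hand-computed binary cross entropy, averaged over the batch
# (the default reduction='mean'):
manual = -(target * probs.log() + (1 - target) * (1 - probs).log()).mean()
```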

Parameters:

weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, it has to be a Tensor of size nbatch.

size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime specifying either of those two args will override reduction. Default: 'mean'. (In short: return the per-element losses, the batch mean, or the batch sum; the default is the batch mean.)

Shape:

Input: (N, *), where * means any number of additional dimensions

Target: (N, *), same shape as the input

Output: scalar. If reduction is 'none', then (N, *), same shape as the input.
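The three reduction modes can be checked directly (a sketch; the (2, 3) shape is arbitrary, and the probabilities are kept away from 0 and 1 to avoid log(0)):

```python
import torch
import torch.nn as nn

# Probabilities strictly inside (0, 1) so every log term is finite
probs = torch.rand(2, 3) * 0.98 + 0.01
target = torch.randint(0, 2, (2, 3)).float()

per_element = nn.BCELoss(reduction='none')(probs, target)  # same shape as input: (2, 3)
mean_loss = nn.BCELoss(reduction='mean')(probs, target)    # scalar: mean of per-element losses
sum_loss = nn.BCELoss(reduction='sum')(probs, target)      # scalar: sum of per-element losses
```

With reduction='none' the output keeps the input shape, and the 'mean' and 'sum' results are exactly the mean and sum of that per-element tensor.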

Source code:

def binary_cross_entropy(input, target, weight=None, size_average=None,
                         reduce=None, reduction='elementwise_mean'):
    r"""Function that measures the Binary Cross Entropy
    between the target and the output.

    See :class:`~torch.nn.BCELoss` for details.

    Args:
        input: Tensor of arbitrary shape
        target: Tensor of the same shape as input
        weight (Tensor, optional): a manual rescaling weight
                if provided it's repeated to match input tensor shape
        size_average (bool, optional): Deprecated (see :attr:`reduction`). By default,
            the losses are averaged over each loss element in the batch. Note that for
            some losses, there are multiple elements per sample. If the field :attr:`size_average`
            is set to ``False``, the losses are instead summed for each minibatch. Ignored
            when reduce is ``False``. Default: ``True``
        reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the
            losses are averaged or summed over observations for each minibatch depending
            on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per
            batch element instead and ignores :attr:`size_average`. Default: ``True``
        reduction (string, optional): Specifies the reduction to apply to the output:
            'none' | 'elementwise_mean' | 'sum'. 'none': no reduction will be applied,
            'elementwise_mean': the sum of the output will be divided by the number of
            elements in the output, 'sum': the output will be summed. Note: :attr:`size_average`
            and :attr:`reduce` are in the process of being deprecated, and in the meantime,
            specifying either of those two args will override :attr:`reduction`. Default: 'elementwise_mean'

    Examples::

        >>> input = torch.randn((3, 2), requires_grad=True)
        >>> target = torch.rand((3, 2), requires_grad=False)
        >>> loss = F.binary_cross_entropy(F.sigmoid(input), target)
        >>> loss.backward()
    """
    if size_average is not None or reduce is not None:
        reduction = _Reduction.legacy_get_enum(size_average, reduce)
    else:
        reduction = _Reduction.get_enum(reduction)
    if not (target.size() == input.size()):
        warnings.warn("Using a target size ({}) that is different to the input size ({}) is deprecated. "
                      "Please ensure they have the same size.".format(target.size(), input.size()))
    if input.nelement() != target.nelement():
        raise ValueError("Target and input must have the same number of elements. target nelement ({}) "
                         "!= input nelement ({})".format(target.nelement(), input.nelement()))

    if weight is not None:
        new_size = _infer_size(target.size(), weight.size())
        weight = weight.expand(new_size)

    return torch._C._nn.binary_cross_entropy(input, target, weight, reduction)

BCEWithLogitsLoss (improved numerical stability)

CLASS torch.nn.BCEWithLogitsLoss(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)

This loss combines a Sigmoid layer and the BCELoss in a single class.

This version is more numerically stable than using a plain Sigmoid followed by a BCELoss: by combining the two operations into one layer, it takes advantage of the log-sum-exp trick for numerical stability.
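A quick sanity check of the equivalence (a sketch, not from the original post): for moderate logits the combined loss agrees with sigmoid + BCELoss, while for an extreme logit such as 100, where sigmoid saturates to exactly 1.0 in floating point, BCEWithLogitsLoss still returns the mathematically correct finite value.

```python
import torch
import torch.nn as nn

logits = torch.randn(5)
target = torch.empty(5).random_(2)   # random 0/1 targets

# The two formulations agree for well-behaved logits.
combined = nn.BCEWithLogitsLoss()(logits, target)
separate = nn.BCELoss()(torch.sigmoid(logits), target)

# An extreme logit: sigmoid(100) rounds to exactly 1.0 in float, so
# the separate path has to evaluate log(1 - 1.0). The combined loss
# computes the result directly from the logit and stays finite:
# loss = max(x, 0) - x*y + log(1 + exp(-|x|)) ≈ 100 for x=100, y=0.
stable = nn.BCEWithLogitsLoss()(torch.tensor([100.0]), torch.tensor([0.0]))
```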

def binary_cross_entropy_with_logits(input, target, weight=None, size_average=None,
                                     reduce=None, reduction='elementwise_mean', pos_weight=None):
    r"""Function that measures Binary Cross Entropy between target and output
    logits.

    See :class:`~torch.nn.BCEWithLogitsLoss` for details.

    Args:
        input: Tensor of arbitrary shape
        target: Tensor of the same shape as input
        weight (Tensor, optional): a manual rescaling weight
            if provided it's repeated to match input tensor shape
        size_average (bool, optional): Deprecated (see :attr:`reduction`). By default,
            the losses are averaged over each loss element in the batch. Note that for
            some losses, there are multiple elements per sample. If the field :attr:`size_average`
            is set to ``False``, the losses are instead summed for each minibatch. Ignored
            when reduce is ``False``. Default: ``True``
        reduce (bool, optional): Deprecated (see :attr:`reduction`). By default, the
            losses are averaged or summed over observations for each minibatch depending
            on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per
            batch element instead and ignores :attr:`size_average`. Default: ``True``
        reduction (string, optional): Specifies the reduction to apply to the output:
            'none' | 'elementwise_mean' | 'sum'. 'none': no reduction will be applied,
            'elementwise_mean': the sum of the output will be divided by the number of
            elements in the output, 'sum': the output will be summed. Note: :attr:`size_average`
            and :attr:`reduce` are in the process of being deprecated, and in the meantime,
            specifying either of those two args will override :attr:`reduction`. Default: 'elementwise_mean'
        pos_weight (Tensor, optional): a weight of positive examples.
                Must be a vector with length equal to the number of classes.

    Examples::

        >>> input = torch.randn(3, requires_grad=True)
        >>> target = torch.empty(3).random_(2)
        >>> loss = F.binary_cross_entropy_with_logits(input, target)
        >>> loss.backward()
    """
    if size_average is not None or reduce is not None:
        reduction = _Reduction.legacy_get_string(size_average, reduce)
    if not (target.size() == input.size()):
        raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))

    max_val = (-input).clamp(min=0)

    if pos_weight is None:
        loss = input - input * target + max_val + ((-max_val).exp() + (-input - max_val).exp()).log()
    else:
        log_weight = 1 + (pos_weight - 1) * target
        loss = input - input * target + log_weight * (max_val + ((-max_val).exp() + (-input - max_val).exp()).log())

    if weight is not None:
        loss = loss * weight

    if reduction == 'none':
        return loss
    elif reduction == 'elementwise_mean':
        return loss.mean()
    else:
        return loss.sum()
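The pos_weight argument in the source above up-weights the positive-class term, which is useful for class imbalance. A sketch (the factor 3 is an arbitrary illustrative choice): with pos_weight = 3, each positive example contributes as if it appeared three times, while negative examples are unaffected.

```python
import torch
import torch.nn as nn

logits = torch.randn(6)
target = torch.empty(6).random_(2)   # random 0/1 targets

# Per-element losses, with and without pos_weight, for comparison.
plain = nn.BCEWithLogitsLoss(reduction='none')(logits, target)
weighted = nn.BCEWithLogitsLoss(reduction='none',
                                pos_weight=torch.tensor(3.0))(logits, target)

# On negative targets (y=0) the two agree; on positive targets (y=1)
# the weighted loss is exactly three times the plain loss, since
# pos_weight multiplies only the y * log(sigmoid(x)) term.
neg, pos = target == 0, target == 1
```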
