Common loss functions in PyTorch

Published: 2025/3/19

1 nn.L1Loss

loss(x_i, y_i) = |x_i - y_i|

Let us run a quick experiment to see the concrete behaviour:

# torch.nn.L1Loss
import torch

l1_loss_fn = torch.nn.L1Loss(reduce=False, size_average=False)
input = torch.autograd.Variable(torch.randn(3, 4))
target = torch.autograd.Variable(torch.randn(3, 4))
loss = l1_loss_fn(input, target)
print(input)
print(target)
print(loss)

# verify against the element-wise definition
res = torch.abs(input - target)
print("loss computed by ourself")
print(res)

The output:

tensor([[ 0.5152, -1.3686,  0.3119, -0.3094],
        [-0.3865, -0.2515, -1.4992, -0.2219],
        [ 0.3324, -0.3495,  0.8597, -0.0018]])
tensor([[ 1.3572, -0.9364,  1.0528,  0.4357],
        [-0.2460,  0.2986, -0.5723, -0.1117],
        [-1.1078,  1.1902,  1.4491, -0.2142]])
tensor([[0.8420, 0.4322, 0.7408, 0.7452],
        [0.1405, 0.5502, 0.9268, 0.1102],
        [1.4402, 1.5397, 0.5894, 0.2124]])
loss computed by ourself
tensor([[0.8420, 0.4322, 0.7408, 0.7452],
        [0.1405, 0.5502, 0.9268, 0.1102],
        [1.4402, 1.5397, 0.5894, 0.2124]])

The reduce and size_average parameters combine as follows:

  • When reduce=False, size_average is ignored and the output has the same shape as the input
  • When reduce=True and size_average=True, the output equals torch.mean(torch.abs(input - target))
  • When reduce=True and size_average=False, the output equals torch.sum(torch.abs(input - target))
  • The other loss functions also take reduce and size_average parameters, with similar behaviour
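Note that in recent PyTorch versions, reduce and size_average are deprecated in favour of a single reduction argument ('none', 'mean', 'sum'). A minimal sketch of how the new argument maps onto the three combinations above:

```python
import torch

torch.manual_seed(0)
x = torch.randn(3, 4)
y = torch.randn(3, 4)

# reduction='none' keeps the element-wise losses (old reduce=False)
loss_none = torch.nn.L1Loss(reduction='none')(x, y)
# reduction='mean' averages them (old reduce=True, size_average=True)
loss_mean = torch.nn.L1Loss(reduction='mean')(x, y)
# reduction='sum' sums them (old reduce=True, size_average=False)
loss_sum = torch.nn.L1Loss(reduction='sum')(x, y)

assert torch.allclose(loss_none, torch.abs(x - y))
assert torch.allclose(loss_mean, torch.abs(x - y).mean())
assert torch.allclose(loss_sum, torch.abs(x - y).sum())
```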

2 nn.MSELoss()

loss(x_i, y_i) = (x_i - y_i)^2

# torch.nn.MSELoss
import torch

MSE_loss_fn = torch.nn.MSELoss(reduce=False, size_average=False)
input = torch.autograd.Variable(torch.randn(3, 4))
target = torch.autograd.Variable(torch.randn(3, 4))
loss = MSE_loss_fn(input, target)
print(input)
print(target)
print(loss)

# verify against the element-wise definition
res = input - target
print(res * res)

The output:

tensor([[ 0.3487,  0.4603, -0.3404, -0.2632],
        [ 0.5376, -1.0239, -1.5926, -1.2581],
        [ 0.8796,  0.4397, -0.2821,  0.0028]])
tensor([[ 0.9949,  2.3588,  0.1053, -1.2758],
        [-0.5526, -1.0309,  0.9014, -0.0308],
        [ 0.9400,  1.1123,  0.3666, -0.5454]])
tensor([[4.1764e-01, 3.6046e+00, 1.9869e-01, 1.0253e+00],
        [1.1884e+00, 4.8598e-05, 6.2203e+00, 1.5061e+00],
        [3.6437e-03, 4.5234e-01, 4.2089e-01, 3.0058e-01]])
tensor([[4.1764e-01, 3.6046e+00, 1.9869e-01, 1.0253e+00],
        [1.1884e+00, 4.8598e-05, 6.2203e+00, 1.5061e+00],
        [3.6437e-03, 4.5234e-01, 4.2089e-01, 3.0058e-01]])

3 nn.BCELoss()

BCELoss is the cross-entropy loss for binary classification; the input must be passed through a Sigmoid before this layer.
loss(x_i, y_i) = -w_i [y_i log(x_i) + (1 - y_i) log(1 - x_i)]

import torch
import torch.nn.functional as F

BCE_loss_fn = torch.nn.BCELoss(reduce=False, size_average=False)
BCE_logit_loss = torch.nn.BCEWithLogitsLoss(reduce=False, size_average=False)

input = torch.autograd.Variable(torch.randn(3, 4))
target = torch.autograd.Variable(torch.FloatTensor(3, 4).random_(2))
loss = BCE_loss_fn(F.sigmoid(input), target)
print(input)
print(target)
print(loss)
print(BCE_logit_loss(input, target))

The output:

tensor([[-0.2960, -0.6593,  0.7279, -1.1125],
        [ 0.9475,  0.5286,  1.6567, -0.2942],
        [-0.0741,  2.1198,  0.9491,  0.7699]])
tensor([[1., 1., 1., 1.],
        [1., 0., 0., 0.],
        [1., 0., 1., 1.]])
tensor([[0.8521, 1.0762, 0.3940, 1.3967],
        [0.3277, 0.9920, 1.8313, 0.5568],
        [0.7309, 2.2332, 0.3272, 0.3805]])
tensor([[0.8521, 1.0762, 0.3940, 1.3967],
        [0.3277, 0.9920, 1.8313, 0.5568],
        [0.7309, 2.2332, 0.3272, 0.3805]])

As the output shows, loss, x, y and w all have the same shape.
Also note that nn.BCEWithLogitsLoss does not require a Sigmoid layer, since it applies the sigmoid internally.
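Both claims, the element-wise formula (with w_i = 1) and the equivalence of BCELoss-after-sigmoid with BCEWithLogitsLoss, can be checked directly. This sketch uses the newer reduction='none' argument in place of the deprecated reduce/size_average:

```python
import torch

torch.manual_seed(0)
logits = torch.randn(3, 4)
target = torch.empty(3, 4).random_(2)

p = torch.sigmoid(logits)
bce = torch.nn.BCELoss(reduction='none')(p, target)
bce_logits = torch.nn.BCEWithLogitsLoss(reduction='none')(logits, target)

# element-wise formula: -[y*log(p) + (1-y)*log(1-p)]
manual = -(target * torch.log(p) + (1 - target) * torch.log(1 - p))

assert torch.allclose(bce, manual, atol=1e-6)
assert torch.allclose(bce, bce_logits, atol=1e-6)
```

BCEWithLogitsLoss is also the numerically safer choice, as it fuses the sigmoid and the log via the log-sum-exp trick.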

4 nn.CrossEntropyLoss

This function is used for multi-class classification; no softmax layer is needed before it.
loss(x, label) = -w_label * log( e^{x_label} / Σ_{j=1}^{N} e^{x_j} )

import torch

loss_fn = torch.nn.CrossEntropyLoss(reduce=False, size_average=False)
input = torch.autograd.Variable(torch.randn(3, 4))
target = torch.autograd.Variable(torch.LongTensor(3).random_(4))
loss = loss_fn(input, target)

print(input)
print(target)
print(loss)

The output:

tensor([[-0.2541,  0.5136,  1.2984, -0.1278],
        [ 1.4406,  2.6949,  1.9780,  1.8310],
        [-0.1522,  1.7501, -1.0701, -0.3558]])
tensor([1, 3, 3])
tensor([1.4309, 1.6501, 2.3915])
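The formula above says CrossEntropyLoss is a log-softmax followed by a negative log-likelihood lookup at the target index. A minimal sketch checking this decomposition (again using the newer reduction='none' argument):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(3, 4)
label = torch.randint(4, (3,))

ce = torch.nn.CrossEntropyLoss(reduction='none')(x, label)

# manual: -log(softmax(x)) gathered at the target class of each row
log_probs = F.log_softmax(x, dim=1)
manual = -log_probs[torch.arange(3), label]

assert torch.allclose(ce, manual, atol=1e-6)
```

This is also why the network itself should output raw logits: adding a softmax layer before CrossEntropyLoss would apply the softmax twice.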
