

2020-12-19 nn.CrossEntropyLoss()

Published: 2023/12/18

Understanding nn.CrossEntropyLoss() by example.

A concrete reading in the context of PICA:

The loss for one row of the K*K matrix PUI can be written as

loss(x, cluster_index) = -x[cluster_index] + log( exp(x[0]) + ... + exp(x[K-1]) )

which is equivalent to -log( exp(x[cluster_index]) / Σ_j exp(x[j]) ). Here x is one row of the K*K PUI, cluster_index selects the target element within that row, and the denominator is the sum over all elements of that row.
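A minimal sketch of that PICA reading (the names `PUI` and `cluster_index` are used here as stand-ins, not taken from the PICA code): each row of the K*K matrix is treated as a logit vector, and the per-row loss is -x[cluster_index] + log-sum-exp of the row.

```python
import torch
import torch.nn as nn

K = 3
torch.manual_seed(0)
PUI = torch.randn(K, K)          # stand-in K*K matrix, one logit row per cluster
cluster_index = torch.arange(K)  # stand-in targets: row i belongs to cluster i

# nn.CrossEntropyLoss applies log-softmax to each row, picks the target
# element, and averages over the rows (default reduction='mean').
loss = nn.CrossEntropyLoss()(PUI, cluster_index)

# The same thing written out: -x[k] + log(sum_j exp(x[j])), averaged over rows.
manual = torch.stack([
    -PUI[i, cluster_index[i]] + torch.logsumexp(PUI[i], dim=0)
    for i in range(K)
]).mean()
assert torch.allclose(loss, manual)
```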

CrossEntropyLoss(input, target)
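Before the worked cases, note what the call above actually computes: nn.CrossEntropyLoss combines log-softmax and negative log-likelihood, so it expects raw logits as input and class indices as target.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[-0.7715, -0.6205, -0.2562]])
target = torch.tensor([0])

ce = nn.CrossEntropyLoss()(logits, target)
# Equivalent decomposition: log_softmax over each row, then NLL on the target index.
nll = F.nll_loss(F.log_softmax(logits, dim=1), target)
assert torch.allclose(ce, nll)
```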

1.

input:
entroy = nn.CrossEntropyLoss()
input = torch.Tensor([[-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562]])
target = torch.tensor([0, 0, 0])
output = entroy(input, target)
print(output)
output: tensor(1.3447)

target gives, for each row of the input, the index of the element to be treated as the true class.

(1)
-x[0] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.7715 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.3447
(2)
-x[0] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.7715 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.3447
(3)
-x[0] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.7715 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.3447

loss = [(1) + (2) + (3)] / 3 = 1.3447
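The arithmetic of case 1 can be checked without PyTorch at all: it is just the per-row formula -x[0] + log(exp(x[0]) + exp(x[1]) + exp(x[2])), and since the three rows are identical, the batch mean equals the per-row value.

```python
import math

row = [-0.7715, -0.6205, -0.2562]
log_sum_exp = math.log(sum(math.exp(v) for v in row))

# Per-row loss for target class 0: -x[0] + log(sum_j exp(x[j]))
per_row_loss = -row[0] + log_sum_exp
print(round(per_row_loss, 4))  # → 1.3447
```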

2.

input:
entroy = nn.CrossEntropyLoss()
input = torch.Tensor([[-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562]])
target = torch.tensor([1, 1, 1])
output = entroy(input, target)
print(output)
output: tensor(1.1937)

(1)
-x[1] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.6205 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.1937
(2)
-x[1] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.6205 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.1937
(3)
-x[1] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.6205 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.1937

loss = [(1) + (2) + (3)] / 3 = 1.1937

3.

input:
entroy = nn.CrossEntropyLoss()
input = torch.Tensor([[-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562]])
target = torch.tensor([2, 2, 2])
output = entroy(input, target)
print(output)
output: tensor(0.8294)

(1)
-x[2] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.2562 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 0.8294
(2)
-x[2] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.2562 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 0.8294
(3)
-x[2] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.2562 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 0.8294

loss = [(1) + (2) + (3)] / 3 = 0.8294

4.

input:
entroy = nn.CrossEntropyLoss()
input = torch.Tensor([[-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562]])
target = torch.tensor([0, 1, 2])  # equivalently: target = torch.arange(3)
output = entroy(input, target)
print(output)
output: tensor(1.1226)

(1)
-x[0] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.7715 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.3447
(2)
-x[1] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.6205 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.1937
(3)
-x[2] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.2562 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 0.8294

loss = [(1) + (2) + (3)] / 3 = 1.1226
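All four cases above can be verified in one loop; assuming PyTorch is available, each expected value is the mean over rows of the per-row loss -x[target] + log-sum-exp.

```python
import torch
import torch.nn as nn

entroy = nn.CrossEntropyLoss()  # default reduction='mean'
x = torch.tensor([[-0.7715, -0.6205, -0.2562]] * 3)

# (target, expected mean loss) for cases 1-4.
cases = [
    (torch.tensor([0, 0, 0]), 1.3447),
    (torch.tensor([1, 1, 1]), 1.1937),
    (torch.tensor([2, 2, 2]), 0.8294),
    (torch.arange(3),         1.1226),
]
for target, expected in cases:
    out = entroy(x, target)
    assert abs(out.item() - expected) < 1e-3
```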
