2020-12-19 nn.CrossEntropyLoss()
Understanding nn.CrossEntropyLoss() through examples:
A concrete reading in the context of PICA:
The expression below can be read as the loss for one row of the K*K PUI matrix:
Here x is one row of the K*K PUI matrix, cluster_index picks out the target element within that row, and the denominator is the sum over all elements of that row.
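As a hedged illustration of that reading (treating PUI as a K*K tensor and cluster_index as each row's target position is an assumption drawn from the description above, not code taken from PICA itself):

import torch
import torch.nn as nn

K = 3
PUI = torch.randn(K, K)          # stand-in for the K*K PUI matrix (assumption)
cluster_index = torch.arange(K)  # assumed target element for each row
row_losses = nn.CrossEntropyLoss(reduction='none')(PUI, cluster_index)
# row_losses[i] = -PUI[i, cluster_index[i]] + log(sum_j exp(PUI[i, j]))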
CrossEntropyLoss(input, target)
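With the default mean reduction, PyTorch documents the per-row loss as

loss(x, class) = -x[class] + log(exp(x[0]) + exp(x[1]) + ... + exp(x[K-1]))

and the reported output is the mean of the per-row losses. Every hand computation below is an instance of this formula.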
1. target = [0, 0, 0]
import torch
import torch.nn as nn

entropy = nn.CrossEntropyLoss()
input = torch.tensor([[-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562]])
target = torch.tensor([0, 0, 0])
output = entropy(input, target)
print(output)  # tensor(1.3447)

Each entry of target indexes which element of the corresponding row (feature vector) the loss is computed against.
(1)
-x[0] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.7715 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.3447
(2)
-x[0] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.7715 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.3447
(3)
-x[0] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.7715 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.3447
loss = [(1) + (2) + (3)] / 3 = 1.3447
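The same number can be reproduced directly with torch.logsumexp, which computes the log(exp(...) + ... + exp(...)) term above (a minimal check; the variable name x is illustrative):

import torch

x = torch.tensor([-0.7715, -0.6205, -0.2562])  # one row of input
print(-x[0] + torch.logsumexp(x, dim=0))  # tensor(1.3447)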
2. target = [1, 1, 1]
entropy = nn.CrossEntropyLoss()
input = torch.tensor([[-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562]])
target = torch.tensor([1, 1, 1])
output = entropy(input, target)
print(output)  # tensor(1.1937)

(1)
-x[1] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.6205 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.1937
(2)
-x[1] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.6205 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.1937
(3)
-x[1] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.6205 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.1937
loss = [(1) + (2) + (3)] / 3 = 1.1937
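nn.CrossEntropyLoss is equivalent to nn.LogSoftmax followed by nn.NLLLoss, which makes the -x[1] + log(...) decomposition above explicit (a sketch on this example's data):

import torch
import torch.nn as nn

input = torch.tensor([[-0.7715, -0.6205, -0.2562]] * 3)
target = torch.tensor([1, 1, 1])
log_probs = nn.LogSoftmax(dim=1)(input)
print(nn.NLLLoss()(log_probs, target))  # tensor(1.1937), same as nn.CrossEntropyLoss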
3. target = [2, 2, 2]
entropy = nn.CrossEntropyLoss()
input = torch.tensor([[-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562]])
target = torch.tensor([2, 2, 2])
output = entropy(input, target)
print(output)  # tensor(0.8294)

(1)
-x[2] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.2562 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 0.8294
(2)
-x[2] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.2562 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 0.8294
(3)
-x[2] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.2562 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 0.8294
loss = [(1) + (2) + (3)] / 3 = 0.8294
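The averaging step loss = [(1) + (2) + (3)] / 3 can be made visible with reduction='none', which returns the per-row losses before the mean (a sketch on this example's data):

import torch
import torch.nn as nn

input = torch.tensor([[-0.7715, -0.6205, -0.2562]] * 3)
target = torch.tensor([2, 2, 2])
per_row = nn.CrossEntropyLoss(reduction='none')(input, target)
print(per_row)         # tensor([0.8294, 0.8294, 0.8294])
print(per_row.mean())  # tensor(0.8294), the default 'mean' reduction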
4. target = [0, 1, 2]
entropy = nn.CrossEntropyLoss()
input = torch.tensor([[-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562],
                      [-0.7715, -0.6205, -0.2562]])
target = torch.tensor([0, 1, 2])  # equivalently: target = torch.arange(3)
output = entropy(input, target)
print(output)  # tensor(1.1226)

(1)
-x[0] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.7715 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.3447
(2)
-x[1] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.6205 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 1.1937
(3)
-x[2] + log(exp(x[0]) + exp(x[1]) + exp(x[2])) =
0.2562 + log(exp(-0.7715) + exp(-0.6205) + exp(-0.2562)) = 0.8294
loss = [(1) + (2) + (3)] / 3 = 1.1226
Summary
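Putting the four examples together: with default settings, nn.CrossEntropyLoss computes -x[target] + log(sum_j exp(x[j])) for each row of input and then averages over rows. A minimal from-scratch sketch (the helper name manual_cross_entropy is illustrative) that reproduces the outputs above:

import torch

def manual_cross_entropy(input, target):
    # per row: -x[target] + log(sum_j exp(x[j])); then average over rows
    lse = torch.logsumexp(input, dim=1)
    picked = input[torch.arange(input.size(0)), target]
    return (lse - picked).mean()

input = torch.tensor([[-0.7715, -0.6205, -0.2562]] * 3)
print(manual_cross_entropy(input, torch.tensor([0, 1, 2])))  # tensor(1.1226)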