

Learning Notes | PyTorch Tutorial 22 (Hook Functions and CAM Visualization)

Published: 2023/12/31

These notes are summarized from the "深度之眼" (Deep Eye) course, for easy reference.
PyTorch version: 1.2

  • The concept of hook functions
  • Hook functions and feature-map extraction
  • CAM (class activation map)

I. The Concept of Hook Functions

A hook adds extra functionality without modifying the main body of the program; like a physical hook, it hangs onto the code from the outside.

1. torch.Tensor.register_hook(hook)
Purpose: registers a backward hook on a tensor.

The hook takes a single argument: the tensor's gradient.

Computational graph and gradient derivation:

\begin{aligned}
y &= (x+w)(w+1) \\
a &= x+w, \quad b = w+1, \quad y = ab \\
\frac{\partial y}{\partial w} &= \frac{\partial y}{\partial a}\frac{\partial a}{\partial w} + \frac{\partial y}{\partial b}\frac{\partial b}{\partial w} \\
&= b \cdot 1 + a \cdot 1 \\
&= (w+1) + (x+w) \\
&= 2w + x + 1 \\
&= 2 \cdot 1 + 2 + 1 = 5
\end{aligned}
After backpropagation finishes, the gradients of the non-leaf nodes a and b are released. A hook function can capture them before that happens.
Test code:

import torch
import torch.nn as nn
from tools.common_tools import set_seed

set_seed(1)  # set random seed

# ----------------------------------- 1 tensor hook 1 -----------------------------------
# flag = 0
flag = 1
if flag:
    w = torch.tensor([1.], requires_grad=True)
    x = torch.tensor([2.], requires_grad=True)

    a = torch.add(w, x)
    b = torch.add(w, 1)
    y = torch.mul(a, b)

    a_grad = list()

    def grad_hook(grad):
        a_grad.append(grad)

    handle = a.register_hook(grad_hook)

    y.backward()

    # inspect gradients
    print("gradient:", w.grad, x.grad, a.grad, b.grad, y.grad)
    print("a_grad[0]: ", a_grad[0])
    handle.remove()

Output:

gradient: tensor([5.]) tensor([2.]) None None None
a_grad[0]:  tensor([2.])
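Besides a hook, PyTorch also offers retain_grad() for keeping a non-leaf tensor's gradient after backward(). A minimal sketch reproducing the same computation with retain_grad() instead of register_hook (this variant is not in the original notes):

```python
import torch

w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)

a = torch.add(w, x)
a.retain_grad()          # ask autograd to keep a.grad even though a is non-leaf
b = torch.add(w, 1)
y = torch.mul(a, b)

y.backward()
print(a.grad)            # tensor([2.]) — the same value the hook captured
```

retain_grad() is simpler when you only want to read the gradient; a hook is more flexible, since it can also record, modify, or log the gradient as it flows through.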

Tensor hook: attempting to modify a leaf node's gradient

# ----------------------------------- 2 tensor hook 2 -----------------------------------
# flag = 0
flag = 1
if flag:
    w = torch.tensor([1.], requires_grad=True)
    x = torch.tensor([2.], requires_grad=True)

    a = torch.add(w, x)
    b = torch.add(w, 1)
    y = torch.mul(a, b)

    a_grad = list()

    def grad_hook(grad):
        grad *= 2
        # return grad * 3

    handle = w.register_hook(grad_hook)

    y.backward()

    # inspect gradient
    print("w.grad: ", w.grad)
    handle.remove()

Output (with the previous block still run first):

gradient: tensor([5.]) tensor([2.]) None None None
a_grad[0]:  tensor([2.])
w.grad:  tensor([10.])

If the line return grad * 3 is uncommented, the hook's return value replaces the original gradient, and the output becomes:

gradient: tensor([5.]) tensor([2.]) None None None
a_grad[0]:  tensor([2.])
w.grad:  tensor([30.])

2. torch.nn.Module.register_forward_hook
Purpose: registers a forward hook on a module.

Hook parameters:

  • module: the current layer
  • input: the layer's input data
  • output: the layer's output data


Test code:

# ----------------------------------- 3 Module.register_forward_hook and pre hook -----------------------------------
# flag = 0
flag = 1
if flag:
    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(1, 2, 3)
            self.pool1 = nn.MaxPool2d(2, 2)

        def forward(self, x):
            x = self.conv1(x)
            x = self.pool1(x)
            return x

    def forward_hook(module, data_input, data_output):
        fmap_block.append(data_output)
        input_block.append(data_input)

    def forward_pre_hook(module, data_input):
        print("forward_pre_hook input:{}".format(data_input))

    def backward_hook(module, grad_input, grad_output):
        print("backward hook input:{}".format(grad_input))
        print("backward hook output:{}".format(grad_output))

    # initialize network
    net = Net()
    net.conv1.weight[0].detach().fill_(1)
    net.conv1.weight[1].detach().fill_(2)
    net.conv1.bias.data.detach().zero_()

    # register hooks
    fmap_block = list()
    input_block = list()
    net.conv1.register_forward_hook(forward_hook)
    # net.conv1.register_forward_pre_hook(forward_pre_hook)
    # net.conv1.register_backward_hook(backward_hook)

    # inference
    fake_img = torch.ones((1, 1, 4, 4))  # batch size * channel * H * W
    output = net(fake_img)

    """
    loss_fnc = nn.L1Loss()
    target = torch.randn_like(output)
    loss = loss_fnc(target, output)
    loss.backward()
    """

    # observe
    print("output shape: {}\noutput value: {}\n".format(output.shape, output))
    print("feature maps shape: {}\noutput value: {}\n".format(fmap_block[0].shape, fmap_block[0]))
    print("input shape: {}\ninput value: {}".format(input_block[0][0].shape, input_block[0]))

Output:

output shape: torch.Size([1, 2, 1, 1])
output value: tensor([[[[ 9.]],

         [[18.]]]], grad_fn=<MaxPool2DWithIndicesBackward>)

feature maps shape: torch.Size([1, 2, 2, 2])
output value: tensor([[[[ 9.,  9.],
          [ 9.,  9.]],

         [[18., 18.],
          [18., 18.]]]], grad_fn=<ThnnConv2DBackward>)

input shape: torch.Size([1, 1, 4, 4])
input value: (tensor([[[[1., 1., 1., 1.],
          [1., 1., 1., 1.],
          [1., 1., 1., 1.],
          [1., 1., 1., 1.]]]]),)

To see how the hook machinery works, set a breakpoint at output = net(fake_img) and step into it.
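Stepping into net(fake_img) lands in nn.Module.__call__, which dispatches the hooks around forward(). The following is a simplified sketch of that dispatch order, not PyTorch's actual source code (the real implementation also handles backward hooks, keyword arguments, and hook removal):

```python
# Simplified sketch of the hook dispatch inside nn.Module.__call__
# (illustrative stand-in, paraphrasing the order seen in PyTorch 1.2):
class ModuleSketch:
    def __init__(self):
        self._forward_pre_hooks = []
        self._forward_hooks = []

    def forward(self, x):
        return x * 2  # placeholder computation

    def __call__(self, x):
        # 1. forward_pre_hooks run first and see only the input;
        #    a non-None return value replaces the input
        for hook in self._forward_pre_hooks:
            result = hook(self, x)
            if result is not None:
                x = result
        # 2. the actual forward pass
        output = self.forward(x)
        # 3. forward_hooks run last and see both input and output
        for hook in self._forward_hooks:
            hook(self, x, output)
        return output
```

This ordering explains the outputs above: forward_pre_hook prints before the convolution runs, and forward_hook captures the feature map right after it.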

3. torch.nn.Module.register_forward_pre_hook
Purpose: registers a hook that runs before the module's forward pass.

Hook parameters:

  • module: the current layer
  • input: the layer's input data

4. torch.nn.Module.register_backward_hook
Purpose: registers a backward hook on a module.

Hook parameters:

  • module: the current layer
  • grad_input: the gradients with respect to the layer's inputs
  • grad_output: the gradients with respect to the layer's outputs

The full code is the same as above; now uncomment the pre-hook and backward-hook registrations and enable the loss/backward block.

# ----------------------------------- 3 Module.register_forward_hook and pre hook -----------------------------------
# flag = 0
flag = 1
if flag:
    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(1, 2, 3)
            self.pool1 = nn.MaxPool2d(2, 2)

        def forward(self, x):
            x = self.conv1(x)
            x = self.pool1(x)
            return x

    def forward_hook(module, data_input, data_output):
        fmap_block.append(data_output)
        input_block.append(data_input)

    def forward_pre_hook(module, data_input):
        print("forward_pre_hook input:{}".format(data_input))

    def backward_hook(module, grad_input, grad_output):
        print("backward hook input:{}".format(grad_input))
        print("backward hook output:{}".format(grad_output))

    # initialize network
    net = Net()
    net.conv1.weight[0].detach().fill_(1)
    net.conv1.weight[1].detach().fill_(2)
    net.conv1.bias.data.detach().zero_()

    # register hooks
    fmap_block = list()
    input_block = list()
    net.conv1.register_forward_hook(forward_hook)
    net.conv1.register_forward_pre_hook(forward_pre_hook)
    net.conv1.register_backward_hook(backward_hook)

    # inference
    fake_img = torch.ones((1, 1, 4, 4))  # batch size * channel * H * W
    output = net(fake_img)

    loss_fnc = nn.L1Loss()
    target = torch.randn_like(output)
    loss = loss_fnc(target, output)
    loss.backward()

    # observe
    # print("output shape: {}\noutput value: {}\n".format(output.shape, output))
    # print("feature maps shape: {}\noutput value: {}\n".format(fmap_block[0].shape, fmap_block[0]))
    # print("input shape: {}\ninput value: {}".format(input_block[0][0].shape, input_block[0]))

Output:

forward_pre_hook input:(tensor([[[[1., 1., 1., 1.],
          [1., 1., 1., 1.],
          [1., 1., 1., 1.],
          [1., 1., 1., 1.]]]]),)
backward hook input:(None, tensor([[[[0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000]]],

        [[[0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000],
          [0.5000, 0.5000, 0.5000]]]]), tensor([0.5000, 0.5000]))
backward hook output:(tensor([[[[0.5000, 0.0000],
          [0.0000, 0.0000]],

         [[0.5000, 0.0000],
          [0.0000, 0.0000]]]]),)

II. Hook Functions and Feature-Map Extraction

Test code:

import torch.nn as nn
import numpy as np
from PIL import Image
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.utils.tensorboard import SummaryWriter
from tools.common_tools import set_seed
import torchvision.models as models

set_seed(1)  # set random seed

# ----------------------------------- feature map visualization -----------------------------------
# flag = 0
flag = 1
if flag:
    writer = SummaryWriter(comment='test_your_comment', filename_suffix="_test_your_filename_suffix")

    # data
    path_img = "./lena.png"     # your path to image
    normMean = [0.49139968, 0.48215827, 0.44653124]
    normStd = [0.24703233, 0.24348505, 0.26158768]

    norm_transform = transforms.Normalize(normMean, normStd)
    img_transforms = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        norm_transform
    ])

    img_pil = Image.open(path_img).convert('RGB')
    if img_transforms is not None:
        img_tensor = img_transforms(img_pil)
    img_tensor.unsqueeze_(0)    # chw --> bchw

    # model
    alexnet = models.alexnet(pretrained=True)

    # register a hook on every Conv2d layer
    fmap_dict = dict()
    for name, sub_module in alexnet.named_modules():
        if isinstance(sub_module, nn.Conv2d):
            key_name = str(sub_module.weight.shape)
            fmap_dict.setdefault(key_name, list())

            n1, n2 = name.split(".")

            def hook_func(m, i, o):
                key_name = str(m.weight.shape)
                fmap_dict[key_name].append(o)

            alexnet._modules[n1]._modules[n2].register_forward_hook(hook_func)

    # forward
    output = alexnet(img_tensor)

    # add images to TensorBoard
    for layer_name, fmap_list in fmap_dict.items():
        fmap = fmap_list[0]
        fmap.transpose_(0, 1)

        nrow = int(np.sqrt(fmap.shape[0]))
        fmap_grid = vutils.make_grid(fmap, normalize=True, scale_each=True, nrow=nrow)
        writer.add_image('feature map in {}'.format(layer_name), fmap_grid, global_step=322)

Launch TensorBoard from the current directory with tensorboard --logdir=./runs, then open http://localhost:6006/ in a browser.

III. CAM (Class Activation Map)
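CAM weights the feature maps of the network's last convolutional layer by the fully connected classifier's weights for the target class, producing a heatmap of where the network "looked". It requires a specific architecture: conv features, then global average pooling, then a single FC layer. Below is a minimal sketch with a toy network; ToyCamNet, its shapes, and the random input are all illustrative assumptions, not the course's model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy CAM-compatible network: conv features -> global average pooling -> one FC layer
class ToyCamNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ToyCamNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        fmap = self.features(x)                  # (B, 32, H, W)
        pooled = F.adaptive_avg_pool2d(fmap, 1)  # global average pooling -> (B, 32, 1, 1)
        logits = self.fc(pooled.flatten(1))
        return logits, fmap

net = ToyCamNet()
img = torch.randn(1, 3, 32, 32)                 # random stand-in for a real image
logits, fmap = net(img)
cls = logits.argmax(dim=1).item()

# CAM: weighted sum of the feature maps, using the FC weights of the predicted class
weights = net.fc.weight[cls]                                # (32,)
cam = (weights.view(-1, 1, 1) * fmap[0]).sum(dim=0)         # (H, W)
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalize to [0, 1]
print(cam.shape)  # torch.Size([32, 32])
```

In practice the feature map would be captured from a pretrained network with a forward hook, exactly as in section II, and the normalized CAM would be upsampled to the input resolution and overlaid on the image. For architectures without the GAP + single-FC structure, Grad-CAM generalizes this idea using gradients instead of FC weights.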


