
Implementing Classic Models in PyTorch, Part 5: ResNet


ResNet

Network Structure

ResNet-34 begins with a 7x7 convolution and a max-pooling layer, followed by four stages of residual blocks (3, 4, 6 and 3 blocks with 64, 128, 256 and 512 channels respectively), and ends with global average pooling and a fully connected classification layer.


Code

# ------------------------- Build ResNet-34 in about 50 lines of code -------------------------
from torch import nn
import torch as t
from torch.nn import functional as F


class ResidualBlock(nn.Module):
    # Sub-module: a residual block with two 3x3 convolutions and an optional shortcut branch
    def __init__(self, inchannel, outchannel, stride=1, shortcut=None):
        super(ResidualBlock, self).__init__()
        self.left = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, 3, stride, 1, bias=False),
            nn.BatchNorm2d(outchannel),
            nn.ReLU(inplace=True),
            nn.Conv2d(outchannel, outchannel, 3, 1, 1, bias=False),
            nn.BatchNorm2d(outchannel))
        self.right = shortcut

    def forward(self, x):
        out = self.left(x)
        residual = x if self.right is None else self.right(x)
        out += residual
        return F.relu(out)


class ResNet(nn.Module):
    # Main module: ResNet-34.
    # ResNet-34 consists of several layers, each made up of multiple residual blocks.
    # Residual blocks are implemented as the sub-module above; layers are built by _make_layer.
    def __init__(self, num_classes=1000):
        super(ResNet, self).__init__()
        # Stem: 7x7 convolution followed by max pooling
        self.pre = nn.Sequential(
            nn.Conv2d(3, 64, 7, 2, 3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2, 1))
        # Four stages with 3, 4, 6 and 3 residual blocks respectively
        self.layer1 = self._make_layer(64, 64, 3)
        self.layer2 = self._make_layer(64, 128, 4, stride=2)
        self.layer3 = self._make_layer(128, 256, 6, stride=2)
        self.layer4 = self._make_layer(256, 512, 3, stride=2)
        # Fully connected layer for classification
        self.fc = nn.Linear(512, num_classes)

    def _make_layer(self, inchannel, outchannel, block_num, stride=1):
        # Build a layer containing block_num residual blocks.
        # The first block gets a 1x1 convolution as its (projection) shortcut.
        shortcut = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, 1, stride, bias=False),
            nn.BatchNorm2d(outchannel))
        layers = []
        layers.append(ResidualBlock(inchannel, outchannel, stride, shortcut))
        for i in range(1, block_num):
            layers.append(ResidualBlock(outchannel, outchannel))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.pre(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = F.avg_pool2d(x, 7)
        x = x.view(x.size(0), -1)
        return self.fc(x)


resnet = ResNet()
print(resnet)

The output is as follows:

ResNet(
  (pre): Sequential(
    (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  )
  (layer1): Sequential(
    (0): ResidualBlock(
      (left): Sequential((0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
      (right): Sequential((0): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
    )
    (1): ResidualBlock((left): Sequential((0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
    (2): ResidualBlock((left): Sequential((0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
  )
  (layer2): Sequential(
    (0): ResidualBlock(
      (left): Sequential((0): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
      (right): Sequential((0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
    )
    (1): ResidualBlock((left): Sequential((0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
    (2): ResidualBlock((left): Sequential((0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
    (3): ResidualBlock((left): Sequential((0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
  )
  (layer3): Sequential(
    (0): ResidualBlock(
      (left): Sequential((0): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
      (right): Sequential((0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
    )
    (1): ResidualBlock((left): Sequential((0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
    (2): ResidualBlock((left): Sequential((0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
    (3): ResidualBlock((left): Sequential((0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
    (4): ResidualBlock((left): Sequential((0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
    (5): ResidualBlock((left): Sequential((0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
  )
  (layer4): Sequential(
    (0): ResidualBlock(
      (left): Sequential((0): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
      (right): Sequential((0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
    )
    (1): ResidualBlock((left): Sequential((0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
    (2): ResidualBlock((left): Sequential((0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))
  )
  (fc): Linear(in_features=512, out_features=1000, bias=True)
)
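
As a quick sanity check (this snippet is not part of the original post), the model can be run on a random batch; the input must be 224x224 because forward uses a fixed 7x7 average-pooling window:

import torch as t

# A dummy batch of 2 RGB images at 224x224, the resolution this model expects.
inputs = t.randn(2, 3, 224, 224)
outputs = resnet(inputs)
print(outputs.shape)   # torch.Size([2, 1000]) -- one score per class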

Notes on the paper

Fitting high-dimensional functions

Deep neural networks can fit high-dimensional non-linear functions.

  • They can fit a target mapping H(x), and they can just as well fit the residual H(x) - x.
  • Writing F(x) = H(x) - x, the original mapping is recovered as H(x) = F(x) + x.
  • The two formulations are not equally easy to learn: when the desired mapping is close to the identity, driving the residual F(x) toward zero is easier than fitting H(x) directly.
  • This also helps with the vanishing-gradient and degradation problems: if the added layers simply learn an identity mapping, the deeper network's training error will not increase (a brief sketch follows this list).
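
As a short supporting sketch (a standard argument, not spelled out in the original notes): for a residual unit y = x + F(x, {w_i}), the chain rule gives

\frac{\partial \mathcal{L}}{\partial x} = \frac{\partial \mathcal{L}}{\partial y}\left(I + \frac{\partial F(x, \{w_i\})}{\partial x}\right),

so the shortcut contributes a direct identity term to the gradient no matter how small \partial F / \partial x becomes, and setting the residual branch to F(x) = 0 exactly recovers the identity mapping H(x) = x.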
Identity mapping via skip connections

Residual block

y = F(x, {w_i}) + x

Here F is the residual mapping to be learned, and its output has the same dimensions as x; F + x is the skip (shortcut) connection, implemented as element-wise addition, and the sum is passed through a ReLU.

The shortcut introduces no extra parameters and no additional computational complexity.
F should contain two or more layers; with a single layer the block degenerates into a linear mapping, y = w_1 x + x, and offers no benefit.
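
For concreteness, the standard two-layer form from the ResNet paper, matching the block implemented in the code above, is

F(x) = W_2\,\sigma(W_1 x), \qquad y = \mathrm{ReLU}\left(F(x) + x\right),

where \sigma denotes ReLU (and in the code batch normalization is applied after each convolution). When the dimensions of F(x) and x differ, the shortcut becomes a linear projection W_s x, i.e. y = \mathrm{ReLU}\left(F(x) + W_s x\right); this is exactly what the 1x1 convolution built in _make_layer provides.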

Residual network

Two simple design rules:

  • Layers that produce feature maps of the same size use the same number of convolutional filters;
  • when the feature-map size is halved, the number of filters is doubled, so that the time complexity per layer stays roughly constant;
  • the residual network is obtained from the corresponding plain network by inserting skip connections.
    When the dimensions increase, the shortcut either pads the identity with extra zero entries (option A), or uses a 1x1 convolution to match the dimensions (option B, which the code above uses); a sketch of the zero-padding variant follows this list.
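
The 1x1 projection shortcut (option B) is what _make_layer builds above. For comparison, here is a minimal sketch of the parameter-free zero-padding shortcut (option A); it is not part of the original post, and the helper name pad_shortcut is made up for illustration.

import torch
import torch.nn.functional as F

def pad_shortcut(x, out_channels, stride=1):
    # Option A shortcut: identity mapping with zero-padded channels and no learned parameters.
    if stride > 1:
        # Subsample spatially; a strided slice is the simplest parameter-free choice.
        x = x[:, :, ::stride, ::stride]
    pad_channels = out_channels - x.size(1)
    if pad_channels > 0:
        # F.pad pads from the last dimension backwards: (W_left, W_right, H_top, H_bottom, C_front, C_back)
        x = F.pad(x, (0, 0, 0, 0, 0, pad_channels))
    return x

# Example: a 64-channel, 56x56 feature map mapped to 128 channels at 28x28.
x = torch.randn(1, 64, 56, 56)
print(pad_shortcut(x, out_channels=128, stride=2).shape)  # torch.Size([1, 128, 28, 28])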

References

  • Pytorch學習(三)–用50行代碼搭建ResNet
  • Pytorch實戰2:ResNet-18實現Cifar-10圖像分類(測試集分類準確率95.170%)