

Implementing ResNet18 through ResNet152 in PyTorch

Published 2024/1/23 by 豆豆

ResNet's importance in deep learning is hard to overstate: since it was proposed in 2015, its residual design has been adopted by countless models, because the skip connections make much deeper networks trainable. The common variants are resnet18, resnet34, resnet50, resnet101, and resnet152. Each consists of one convolutional stem block followed by four residual stages, and each stage contains several residual units. resnet18 and resnet34 share the same unit design, two 3x3 convolutional layers plus a skip connection, and differ only in how many units each stage contains.
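The core idea can be sketched in a few lines. The block below (a hypothetical `TinyResidual`, not one of the classes implemented later in this post) only illustrates the skip connection: the layers learn a residual F(x), and the output is F(x) + x.

```python
import torch
import torch.nn as nn


class TinyResidual(nn.Module):
    """Minimal residual unit: output = relu(F(x) + x)."""

    def __init__(self, channels):
        super().__init__()
        # 3x3 conv with padding 1 keeps the spatial size unchanged
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)) + x)  # skip connection


block = TinyResidual(8)
y = block(torch.rand(1, 8, 16, 16))
print(y.shape)  # torch.Size([1, 8, 16, 16]) -- same shape as the input
```

Because the addition requires F(x) and x to have identical shapes, any unit that changes channels or spatial size must also transform the skip path, which is what the 1x1 convolutions in the real implementations below are for.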

The residual unit of resnet18 and resnet34:

The input feature map x passes through two convolutional layers with 3x3 kernels; padding of 1 keeps the spatial size unchanged, and each convolution is followed by a BN layer and a ReLU. After the two convolutions, the result is added element-wise to the input x. resnet50 through resnet152 use a different, "bottleneck" residual unit built from two 1x1 convolutions and one 3x3 convolution.

|        | resnet18      | resnet34      | resnet50      | resnet101      | resnet152      |
|--------|---------------|---------------|---------------|----------------|----------------|
| block1 | 1\*conv        | 1\*conv        | 1\*conv        | 1\*conv         | 1\*conv         |
| block2 | 2\*res(2\*conv) | 3\*res(2\*conv) | 3\*res(3\*conv) | 3\*res(3\*conv)  | 3\*res(3\*conv)  |
| block3 | 2\*res(2\*conv) | 4\*res(2\*conv) | 4\*res(3\*conv) | 4\*res(3\*conv)  | 8\*res(3\*conv)  |
| block4 | 2\*res(2\*conv) | 6\*res(2\*conv) | 6\*res(3\*conv) | 23\*res(3\*conv) | 36\*res(3\*conv) |
| block5 | 2\*res(2\*conv) | 3\*res(2\*conv) | 3\*res(3\*conv) | 3\*res(3\*conv)  | 3\*res(3\*conv)  |
| FC     | linear        | linear        | linear        | linear         | linear         |
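The depths in the model names follow directly from the table above: counting only convolutional and fully connected layers (the convention of the original ResNet paper), depth = 1 stem conv + (convs per unit × total units) + 1 FC layer. A quick sanity check:

```python
# convs per residual unit, units per stage (from the table above)
configs = {
    18:  (2, [2, 2, 2, 2]),
    34:  (2, [3, 4, 6, 3]),
    50:  (3, [3, 4, 6, 3]),
    101: (3, [3, 4, 23, 3]),
    152: (3, [3, 8, 36, 3]),
}
for depth, (convs_per_unit, units) in configs.items():
    computed = 1 + convs_per_unit * sum(units) + 1
    print(f"resnet{depth}: {computed} layers")  # matches the name
```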

Implementation of resnet18:

```python
import torch.nn as nn
from torch.nn import functional as F


# Residual unit
class Residual(nn.Module):
    def __init__(self, input_channel, out_channel, use_conv1x1=False, strides=1):
        super().__init__()
        self.conv1 = nn.Conv2d(input_channel, out_channel, kernel_size=3, stride=strides, padding=1)
        self.conv2 = nn.Conv2d(out_channel, out_channel, kernel_size=3, stride=1, padding=1)
        if use_conv1x1:
            # 1x1 conv on the skip path to match channels and spatial size
            self.conv3 = nn.Conv2d(input_channel, out_channel, kernel_size=1, stride=strides)
        else:
            self.conv3 = None
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, X):
        Y = self.relu(self.bn1(self.conv1(X)))
        Y = self.bn2(self.conv2(Y))
        if self.conv3:
            X = self.conv3(X)
        Y += X
        return F.relu(Y)


# Residual stage: several residual units in sequence
def res_block(input_channel, out_channel, num_residuals, first_block=False):
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            blk.append(Residual(input_channel, out_channel, use_conv1x1=True, strides=2))
        else:
            blk.append(Residual(out_channel, out_channel))
    return blk


def resnet18(num_channel, classes):
    block_1 = nn.Sequential(
        nn.Conv2d(num_channel, 64, kernel_size=7, stride=2, padding=3),
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
    )
    block_2 = nn.Sequential(*res_block(64, 64, 2, first_block=True))
    block_3 = nn.Sequential(*res_block(64, 128, 2))
    block_4 = nn.Sequential(*res_block(128, 256, 2))
    block_5 = nn.Sequential(*res_block(256, 512, 2))
    model = nn.Sequential(
        block_1, block_2, block_3, block_4, block_5,
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(512, classes)
    )
    return model
```

Implementation of resnet34:

```python
def resnet34(classes):
    b1 = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),  # [3,224,224] --> [64,112,112]
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1)       # [64,112,112] --> [64,56,56]
    )
    b2 = nn.Sequential(*res_block(64, 64, 3, first_block=True))
    b3 = nn.Sequential(*res_block(64, 128, 4))
    b4 = nn.Sequential(*res_block(128, 256, 6))
    b5 = nn.Sequential(*res_block(256, 512, 3))
    model = nn.Sequential(
        b1, b2, b3, b4, b5,
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(512, classes)
    )
    return model
```

The 1x1 convolution on the skip connection is applied in the first residual unit of each stage: it changes the channel count of the input feature map, and in every stage after the first it also downsamples it. For a [3,224,224] input, the stem block (7x7 convolution plus max pooling) produces a [64,56,56] feature map. Because the stem has already downsampled twice, the first residual stage is designed to change neither the spatial size nor the channel count. Each of the three later stages halves the spatial size and doubles the channels, so its first unit applies a 1x1 convolution with stride 2 to the skip path and also uses stride 2 in its first convolutional layer, ensuring the two branches have matching shapes at the final addition.
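The spatial sizes quoted above can be traced with the standard convolution output formula, floor((n + 2p − k) / s) + 1. A quick check, assuming the stride-2 layers described above:

```python
def out_size(n, k, s, p):
    # standard conv/pool output size: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1


sizes = [224]
sizes.append(out_size(sizes[-1], 7, 2, 3))  # stem 7x7/2 conv: 224 -> 112
sizes.append(out_size(sizes[-1], 3, 2, 1))  # 3x3/2 max pool:  112 -> 56
# the first residual stage keeps 56x56; each later stage starts
# with a stride-2 unit that halves the feature map
for _ in range(3):
    sizes.append(out_size(sizes[-1], 3, 2, 1))
print(sizes)  # [224, 112, 56, 28, 14, 7]
```

The final 7x7 map is what the AdaptiveAvgPool2d layer reduces to 1x1 before the linear classifier.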

Implementation of resnet50:

```python
import torch
import torch.nn as nn
from torch.nn import functional as F


# Bottleneck residual unit: 1x1 reduce -> 3x3 -> 1x1 expand
class Residual(nn.Module):
    def __init__(self, input_channel, out_channel, use_conv1x1=False, strides=1):
        super().__init__()
        mid_channel = out_channel // 4
        self.conv1 = nn.Conv2d(input_channel, mid_channel, kernel_size=1, stride=strides)
        self.conv2 = nn.Conv2d(mid_channel, mid_channel, kernel_size=3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(mid_channel, out_channel, kernel_size=1)
        if use_conv1x1:
            # 1x1 conv on the skip path to match channels (and size when strides=2)
            self.conv4 = nn.Conv2d(input_channel, out_channel, kernel_size=1, stride=strides)
        else:
            self.conv4 = None
        # each convolution needs its own BN layer; reusing one BN module
        # across two layers would mix their running statistics
        self.bn1 = nn.BatchNorm2d(mid_channel)
        self.bn2 = nn.BatchNorm2d(mid_channel)
        self.bn3 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, X):
        Y = self.relu(self.bn1(self.conv1(X)))
        Y = self.relu(self.bn2(self.conv2(Y)))
        Y = self.bn3(self.conv3(Y))
        if self.conv4:
            X = self.conv4(X)
        Y += X
        return F.relu(Y)


# Residual stage: several residual units in sequence
def res_block(input_channel, out_channel, num_residuals, first_block=False):
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            # change channels and downsample
            blk.append(Residual(input_channel, out_channel, use_conv1x1=True, strides=2))
        elif i == 0 and first_block:
            # first stage: change channels only, no downsampling
            blk.append(Residual(input_channel, out_channel, use_conv1x1=True, strides=1))
        else:
            blk.append(Residual(out_channel, out_channel))
    return blk


def resnet50(num_channel, classes):
    b1 = nn.Sequential(
        nn.Conv2d(num_channel, 64, kernel_size=7, stride=2, padding=3),
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
    )
    b2 = nn.Sequential(*res_block(64, 256, 3, first_block=True))
    b3 = nn.Sequential(*res_block(256, 512, 4))
    b4 = nn.Sequential(*res_block(512, 1024, 6))
    b5 = nn.Sequential(*res_block(1024, 2048, 3))
    model = nn.Sequential(
        b1, b2, b3, b4, b5,
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(2048, classes)
    )
    return model


net = resnet50(num_channel=3, classes=2)
x = torch.rand(size=(1, 3, 224, 224), dtype=torch.float32)
for layer in net:
    print(layer)
    x = layer(x)
    print(x.shape)
```

Implementation of resnet101:

```python
def resnet101(num_channel, classes):
    b1 = nn.Sequential(
        nn.Conv2d(num_channel, 64, kernel_size=7, stride=2, padding=3),  # [3,224,224] --> [64,112,112]
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1)                 # [64,112,112] --> [64,56,56]
    )
    b2 = nn.Sequential(*res_block(64, 256, 3, first_block=True))
    b3 = nn.Sequential(*res_block(256, 512, 4))
    b4 = nn.Sequential(*res_block(512, 1024, 23))
    b5 = nn.Sequential(*res_block(1024, 2048, 3))
    model = nn.Sequential(
        b1, b2, b3, b4, b5,
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(2048, classes)
    )
    return model
```

Implementation of resnet152:

```python
def resnet152(num_channel, classes):
    b1 = nn.Sequential(
        nn.Conv2d(num_channel, 64, kernel_size=7, stride=2, padding=3),  # [3,224,224] --> [64,112,112]
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1)                 # [64,112,112] --> [64,56,56]
    )
    b2 = nn.Sequential(*res_block(64, 256, 3, first_block=True))
    b3 = nn.Sequential(*res_block(256, 512, 8))
    b4 = nn.Sequential(*res_block(512, 1024, 36))
    b5 = nn.Sequential(*res_block(1024, 2048, 3))
    model = nn.Sequential(
        b1, b2, b3, b4, b5,
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(2048, classes)
    )
    return model
```

Besides using three convolutions per unit, the networks from resnet50 onward differ inside the residual unit in another way: the first and second convolutional layers output only a quarter of the unit's output channels. Moreover, the first residual stage must change the channel count of its input (64 in, 256 out), so in code the first unit of the first stage also needs a 1x1 convolution on the skip path to transform the input channels, even though it does not downsample.
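The point of squeezing to a quarter of the channels before the 3x3 convolution is efficiency: it cuts the weight count dramatically at the wide channel counts where the deeper ResNets operate. A rough comparison at 256 channels (counting convolution weights only, ignoring biases and BN parameters):

```python
def conv_weights(c_in, c_out, k):
    # weights in a kxk convolution: c_in * c_out * k * k
    return c_in * c_out * k * k


c = 256
basic = 2 * conv_weights(c, c, 3)                # two 3x3 convs
bottleneck = (conv_weights(c, c // 4, 1)         # 1x1 reduce
              + conv_weights(c // 4, c // 4, 3)  # 3x3 at reduced width
              + conv_weights(c // 4, c, 1))      # 1x1 expand
print(basic, bottleneck)  # 1179648 69632
```

At this width the bottleneck uses roughly 17x fewer weights than a basic block, which is what makes the 50-, 101-, and 152-layer variants feasible.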
