

Deep Learning Notes: CNN Decoders and Upsampling

Published: 2024/9/15 · 豆豆

A CNN decoder needs to perform upsampling:

The dimensions compressed by the pyramid-shaped encoder have to be expanded back, and this is done with upsampling: either by directly copying values (which tends to produce checkerboard artifacts) or by interpolation, followed by a convolution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# define the NN architecture
class ConvAutoencoder(nn.Module):
    def __init__(self):
        super(ConvAutoencoder, self).__init__()
        ## encoder layers ##
        # conv layer (depth from 1 --> 16), 3x3 kernels
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        # conv layer (depth from 16 --> 4), 3x3 kernels
        self.conv2 = nn.Conv2d(16, 4, 3, padding=1)
        # pooling layer to reduce x-y dims by two; kernel and stride of 2
        self.pool = nn.MaxPool2d(2, 2)
        ## decoder layers ##
        self.conv4 = nn.Conv2d(4, 16, 3, padding=1)
        self.conv5 = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        # add layer, with relu activation function
        # and maxpooling after
        x = F.relu(self.conv1(x))
        x = self.pool(x)
        # add hidden layer, with relu activation function
        x = F.relu(self.conv2(x))
        x = self.pool(x)  # compressed representation
        ## decoder ##
        # upsample, followed by a conv layer, with relu activation function
        # (the older F.upsample is deprecated; use F.interpolate instead)
        x = F.interpolate(x, scale_factor=2, mode='nearest')
        x = F.relu(self.conv4(x))
        # upsample again; the output has a sigmoid applied
        x = F.interpolate(x, scale_factor=2, mode='nearest')
        x = torch.sigmoid(self.conv5(x))
        return x
```
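As a quick standalone sanity check (a minimal sketch, not part of the original post), the interpolate-then-convolve step can be demonstrated on its own: a 4-channel 7x7 feature map is doubled to 14x14 by nearest-neighbor interpolation, then refined by a 3x3 convolution that expands it back to 16 channels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# a fake "compressed representation": batch 1, 4 channels, 7x7 spatial dims
x = torch.randn(1, 4, 7, 7)

# nearest-neighbor interpolation doubles the spatial dims
up = F.interpolate(x, scale_factor=2, mode='nearest')
print(up.shape)  # torch.Size([1, 4, 14, 14])

# a 3x3 conv with padding=1 keeps the spatial dims and changes the depth
conv = nn.Conv2d(4, 16, 3, padding=1)
out = conv(up)
print(out.shape)  # torch.Size([1, 16, 14, 14])
```

Because the interpolation and the convolution are separate steps, the learned kernel only smooths already-upsampled values, which is what avoids uneven kernel overlap.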

PyTorch also provides nn.ConvTranspose2d, which handles this in a single layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# define the NN architecture
class ConvAutoencoder(nn.Module):
    def __init__(self):
        super(ConvAutoencoder, self).__init__()
        ## encoder layers ##
        # conv layer (depth from 1 --> 16), 3x3 kernels
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        # conv layer (depth from 16 --> 4), 3x3 kernels
        self.conv2 = nn.Conv2d(16, 4, 3, padding=1)
        # pooling layer to reduce x-y dims by two; kernel and stride of 2
        self.pool = nn.MaxPool2d(2, 2)
        ## decoder layers ##
        # a kernel of 2 and a stride of 2 will increase the spatial dims by 2
        self.t_conv1 = nn.ConvTranspose2d(4, 16, 2, stride=2)
        self.t_conv2 = nn.ConvTranspose2d(16, 1, 2, stride=2)

    def forward(self, x):
        ## encode ##
        # add hidden layers with relu activation function
        # and maxpooling after
        x = F.relu(self.conv1(x))
        x = self.pool(x)
        # add second hidden layer
        x = F.relu(self.conv2(x))
        x = self.pool(x)  # compressed representation
        ## decode ##
        # add transpose conv layers, with relu activation function
        x = F.relu(self.t_conv1(x))
        # output layer (with sigmoid for scaling from 0 to 1)
        x = torch.sigmoid(self.t_conv2(x))
        return x
```
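The upsampling behavior of the transpose convolution can be checked in isolation (again a standalone sketch, not from the original post): with kernel_size=2 and stride=2 the kernel placements never overlap, and each output dimension is exactly double the input.

```python
import torch
import torch.nn as nn

# transpose conv: kernel of 2 and stride of 2 doubles the spatial dims,
# while changing the depth from 4 to 16 channels
t_conv = nn.ConvTranspose2d(4, 16, 2, stride=2)

# a fake "compressed representation": batch 1, 4 channels, 7x7 spatial dims
x = torch.randn(1, 4, 7, 7)
y = t_conv(x)
print(y.shape)  # torch.Size([1, 16, 14, 14])
```

Since stride equals kernel size here, every output pixel is produced by exactly one kernel placement, so the upsampling weights are learned rather than fixed by an interpolation rule.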
