

Deep Learning (09) -- DenseNet


Table of Contents

  • 1. DenseNet Network Structure
  • 2. Dense Connections and Their Advantages
  • 3. Code Implementation
  • 4. Additional Notes

1. DenseNet Network Structure

(Figure: a dense block; each layer Hi takes the original input X0 together with the feature maps of all preceding layers as its input.)



2. Dense Connections and Their Advantages

Each layer takes the outputs of all preceding layers as its input. A traditional network with L layers has L connections; a DenseNet with L layers has L*(L+1)/2.
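The count follows because layer ℓ receives one incoming connection from each of the ℓ earlier outputs (the input plus each preceding layer):

\sum_{\ell=1}^{L} \ell = \frac{L(L+1)}{2}, \qquad \text{e.g. } L = 5 \;\Rightarrow\; 15 \text{ connections.}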

The DenseNet paper builds mainly on Highway Networks, Residual Networks (ResNets), and GoogLeNet, improving classification accuracy by deepening the network.

The first problem a deeper network must solve is vanishing gradients.

The solution is to make the paths between earlier and later layers as short as possible. In the figure above, layer H4 directly receives the original input X0, as well as the features that every earlier layer computed from X0, which maximizes information flow through the network. During backpropagation, the gradient reaching X0 includes the derivative of the loss taken directly with respect to X0, which helps gradients propagate to the early layers.
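In the paper's notation, a residual network combines features by summation, whereas DenseNet concatenates them:

x_{\ell} = H_{\ell}(x_{\ell-1}) + x_{\ell-1} \quad \text{(ResNet)}, \qquad x_{\ell} = H_{\ell}\big([x_0, x_1, \ldots, x_{\ell-1}]\big) \quad \text{(DenseNet)}

where [x_0, ..., x_{ℓ-1}] denotes channel-wise concatenation of all preceding feature maps and H_ℓ is the BN-ReLU-Conv composite function.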


The concrete DenseNet architecture (the DenseNet-121 configuration implemented below): an initial 7×7 convolution and 3×3 max-pooling, four dense blocks with 6, 12, 24, and 16 layers, a transition layer after each of the first three blocks, and finally global average pooling and a softmax classifier.

3. Code Implementation

The three building blocks: conv block, transition block, and dense block (Keras implementation).

from keras.layers import (Input, Conv2D, BatchNormalization, Activation, Dropout,
                          AveragePooling2D, MaxPooling2D, GlobalAveragePooling2D,
                          Dense, concatenate)
from keras.models import Model
from keras.regularizers import l2


def conv_block(x, stage, branch, nb_filter, dropout_rate=None, weight_decay=1e-4):
    """Apply BatchNorm, ReLU, bottleneck 1x1 Conv2D, 3x3 Conv2D, and optional dropout
    # Arguments
        x: input tensor
        stage: index for dense block
        branch: layer index within each dense block
        nb_filter: number of filters
        dropout_rate: dropout rate
        weight_decay: weight decay factor
    """
    eps = 1.1e-5
    conv_name_base = 'conv' + str(stage) + '_' + str(branch)
    relu_name_base = 'relu' + str(stage) + '_' + str(branch)

    # 1x1 convolution (bottleneck layer)
    inter_channel = 4 * nb_filter
    x = BatchNormalization(epsilon=eps, axis=3, gamma_regularizer=l2(weight_decay),
                           beta_regularizer=l2(weight_decay),
                           name=conv_name_base + '_x1_bn')(x)
    x = Activation('relu', name=relu_name_base + '_x1')(x)
    x = Conv2D(filters=inter_channel, kernel_size=(1, 1), strides=(1, 1), padding='same',
               kernel_initializer='he_uniform', kernel_regularizer=l2(weight_decay),
               name=conv_name_base + '_x1')(x)
    if dropout_rate:
        x = Dropout(dropout_rate)(x)

    # 3x3 convolution
    x = BatchNormalization(epsilon=eps, axis=3, gamma_regularizer=l2(weight_decay),
                           beta_regularizer=l2(weight_decay),
                           name=conv_name_base + '_x2_bn')(x)
    x = Activation('relu', name=relu_name_base + '_x2')(x)
    x = Conv2D(filters=nb_filter, kernel_size=(3, 3), strides=(1, 1), padding='same',
               kernel_initializer='he_uniform', kernel_regularizer=l2(weight_decay),
               name=conv_name_base + '_x2')(x)
    if dropout_rate:
        x = Dropout(dropout_rate)(x)

    return x


def transition_block(x, stage, nb_filter, compression=1.0, dropout_rate=None, weight_decay=1e-4):
    """Apply BatchNorm, ReLU, 1x1 Conv2D (with optional compression), dropout and average pooling
    # Arguments
        x: input tensor
        stage: index for dense block
        nb_filter: number of filters
        compression: calculated as 1 - reduction; reduces the number of feature maps
        dropout_rate: dropout rate
        weight_decay: weight decay factor
    """
    eps = 1.1e-5
    conv_name_base = 'conv' + str(stage) + '_blk'
    relu_name_base = 'relu' + str(stage) + '_blk'
    pool_name_base = 'pool' + str(stage)

    x = BatchNormalization(epsilon=eps, axis=3, name=conv_name_base + '_bn')(x)
    x = Activation('relu', name=relu_name_base)(x)
    x = Conv2D(filters=int(nb_filter * compression), kernel_size=(1, 1), strides=(1, 1),
               padding='same', name=conv_name_base)(x)
    if dropout_rate:
        x = Dropout(dropout_rate)(x)
    x = AveragePooling2D((2, 2), strides=(2, 2), name=pool_name_base)(x)

    return x


def dense_block(x, stage, nb_layers, nb_filter, growth_rate, dropout_rate=None,
                weight_decay=1e-4, grow_nb_filters=True):
    """Build a dense block in which the output of each conv_block is fed to all subsequent ones
    # Arguments
        x: input tensor
        stage: index for dense block
        nb_layers: number of conv_blocks to append to the model
        nb_filter: number of filters
        growth_rate: growth rate
        dropout_rate: dropout rate
        weight_decay: weight decay factor
        grow_nb_filters: flag to allow the number of filters to grow
    """
    concat_feat = x
    for i in range(nb_layers):
        branch = i + 1
        x = conv_block(concat_feat, stage, branch, growth_rate, dropout_rate, weight_decay)
        # Concatenate along the channel axis: every later layer sees all earlier outputs
        concat_feat = concatenate([concat_feat, x], axis=3,
                                  name='concat_' + str(stage) + '_' + str(branch))
        if grow_nb_filters:
            nb_filter += growth_rate

    return concat_feat, nb_filter
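A minimal sanity check of how the blocks compose. This snippet is not from the original post; the layer count, growth rate, and input size are arbitrary illustrative values:

# Toy example: one 4-layer dense block with growth rate 8, then a transition.
inputs = Input((32, 32, 16))
feat, channels = dense_block(inputs, stage=1, nb_layers=4,
                             nb_filter=16, growth_rate=8)
# channels == 16 + 4 * 8 == 48: each conv_block appends 8 feature maps
feat = transition_block(feat, stage=1, nb_filter=channels, compression=0.5)
Model(inputs, feat).summary()   # final feature map: (None, 16, 16, 24)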

DenseNet-BC-121

def DenseNet_BC_121(input_shape=(64, 64, 3), nb_dense_block=4, growth_rate=32, nb_filter=16,
                    reduction=0.0, dropout_rate=0.0, classes=6, weight_decay=1e-4,
                    weights_path=None):
    """Instantiate the DenseNet-121 architecture
    # Arguments
        nb_dense_block: number of dense blocks to add to end
        growth_rate: number of filters to add per dense block
        nb_filter: initial number of filters
        reduction: reduction factor of transition blocks
        dropout_rate: dropout rate
        weight_decay: weight decay factor
        classes: optional number of classes to classify images
        weights_path: path to pre-trained weights
    # Returns
        A Keras model instance.
    """
    eps = 1.1e-5
    compression = 1.0 - reduction
    nb_layers = [6, 12, 24, 16]  # layers per dense block for DenseNet-121

    x_input = Input(input_shape)

    # Initial convolution
    x = Conv2D(filters=nb_filter, kernel_size=(7, 7), strides=(1, 1), padding='same',
               name='conv1')(x_input)
    x = BatchNormalization(epsilon=eps, axis=3, name='conv1_bn')(x)
    x = Activation('relu', name='relu1')(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='same', name='pool1')(x)

    # Add dense blocks, each followed by a transition block
    for block_idx in range(nb_dense_block - 1):
        stage = block_idx + 2
        x, nb_filter = dense_block(x, stage, nb_layers[block_idx], nb_filter, growth_rate,
                                   dropout_rate=dropout_rate, weight_decay=weight_decay)
        x = transition_block(x, stage, nb_filter, compression=compression,
                             dropout_rate=dropout_rate, weight_decay=weight_decay)
        nb_filter = int(nb_filter * compression)

    # The last dense block is not followed by a transition block
    final_stage = stage + 1
    x, nb_filter = dense_block(x, final_stage, nb_layers[-1], nb_filter, growth_rate,
                               dropout_rate=dropout_rate, weight_decay=weight_decay)

    x = BatchNormalization(epsilon=eps, axis=3, name='conv' + str(final_stage) + '_blk_bn')(x)
    x = Activation('relu', name='relu' + str(final_stage) + '_blk')(x)
    x = GlobalAveragePooling2D(name='pool' + str(final_stage))(x)
    x = Dense(classes, activation='softmax', name='softmax_prob')(x)

    model = Model(inputs=x_input, outputs=x, name='DenseNet_BC_121')

    if weights_path is not None:
        model.load_weights(weights_path)

    return model
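One way to instantiate and compile the network. The optimizer, reduction, and dropout values here are illustrative assumptions, not values from the original post:

model = DenseNet_BC_121(input_shape=(64, 64, 3), classes=6,
                        reduction=0.5, dropout_rate=0.2)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()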

4. Additional Notes

DenseNet is narrower and uses fewer parameters: each layer outputs only growth_rate (k) new feature maps, so individual layers can be very narrow, and earlier features are reused rather than relearned.
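To see where the savings come from, the channel bookkeeping of the [6, 12, 24, 16] configuration above can be traced with a few lines of arithmetic. This is a sketch assuming the post's defaults of nb_filter=16 and growth_rate=32, plus an assumed compression of 0.5:

# Channel counts through four dense blocks with transitions in between.
channels, growth_rate, compression = 16, 32, 0.5
for i, nb_layers in enumerate([6, 12, 24, 16]):
    channels += nb_layers * growth_rate          # each layer adds only k maps
    print('after dense block %d: %d channels' % (i + 1, channels))
    if i < 3:                                    # no transition after the last block
        channels = int(channels * compression)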


The paper also applies dropout to randomly drop connections and guard against overfitting, which matters given how many connections DenseNet has.




