

Keras Framework: ResNet50 Code Implementation


Residual Net Concepts

Concept:

Residual net (residual network): the output of some earlier layer is taken and, skipping over several intermediate layers, fed directly into the input of a later layer.
Residual unit: suppose the input to a segment of the network is x and the desired output is H(x). If we pass the input x straight through to the output as the initial result, the target we actually need to learn becomes F(x) = H(x) - x. This is a residual unit: it changes the learning target so that, instead of learning the complete output H(x), the network only learns the difference between output and input, H(x) - x, i.e. the residual.
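
As a quick illustration, here is a minimal sketch of a single residual unit in functional-style Keras. This is my own example, not part of the article's code, and it assumes the same Keras 2.x API used in the implementation below:

from keras.layers import Input, Conv2D, Activation, add

x = Input(shape=(56, 56, 64))               # input to the unit
f = Conv2D(64, (3, 3), padding='same')(x)   # the residual branch computes F(x)
f = Activation('relu')(f)
f = Conv2D(64, (3, 3), padding='same')(f)
y = Activation('relu')(add([f, x]))         # H(x) = F(x) + x via the skip connection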

Details:

- The biggest difference between a plain, directly connected convolutional network and ResNet is that ResNet has many bypass branches feeding the input straight to later layers, so those layers can learn the residual directly. This structure is called a shortcut or skip connection.
- When traditional convolutional or fully connected layers pass information along, some of it is inevitably lost or degraded. ResNet alleviates this problem to some extent: by routing the input around the block directly to the output, it preserves the integrity of the information, and the whole network only needs to learn the part where input and output differ, which simplifies both the learning target and its difficulty.

Approach:

ResNet50 is built from two basic blocks, the Conv Block and the Identity Block. The Conv Block's input and output dimensions differ, so Conv Blocks cannot be chained one after another; their job is to change the network's dimensions. The Identity Block's input and output dimensions are the same, so it can be stacked repeatedly and is used to deepen the network. The sketch below illustrates the difference.
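
The following is a minimal shape check of my own (not from the original article), assuming the same TF-backend Keras 2.x API as the full code below. It shows why the Conv Block needs a projection shortcut: a strided 1x1 convolution reshapes the identity branch so the two paths can still be added after the spatial size and channel count change.

from keras.layers import Input, Conv2D, add
import keras.backend as K

inp = Input(shape=(56, 56, 256))
# The main branch changes the dimensions...
main = Conv2D(512, (3, 3), strides=(2, 2), padding='same')(inp)
# ...so the shortcut must be projected the same way before the add.
short = Conv2D(512, (1, 1), strides=(2, 2))(inp)
out = add([main, short])
print(K.int_shape(out))  # (None, 28, 28, 512): shapes changed, so the raw input
                         # could not have been added directly (an Identity Block
                         # is only valid where shapes already match)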

ResNet50 code implementation:

Main network body:

#-------------------------------------------------------------#
#   ResNet50 network definition
#-------------------------------------------------------------#
from __future__ import print_function

import numpy as np
from keras import layers
from keras.layers import Input
from keras.layers import Dense, Conv2D, MaxPooling2D, ZeroPadding2D, AveragePooling2D
from keras.layers import Activation, BatchNormalization, Flatten
from keras.models import Model
from keras.preprocessing import image
import keras.backend as K
from keras.utils.data_utils import get_file
# Note: imported from keras.applications rather than the standalone
# keras_applications package, whose functions need extra backend arguments
# when called directly.
from keras.applications.imagenet_utils import decode_predictions
from keras.applications.imagenet_utils import preprocess_input


def identity_block(input_tensor, kernel_size, filters, stage, block):
    # Identity Block: the shortcut is the raw input, so the block's input and
    # output shapes must match; used to deepen the network.
    filters1, filters2, filters3 = filters

    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = Conv2D(filters1, (1, 1), name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(name=bn_name_base + '2a')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters2, kernel_size, padding='same', name=conv_name_base + '2b')(x)
    x = BatchNormalization(name=bn_name_base + '2b')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters3, (1, 1), name=conv_name_base + '2c')(x)
    x = BatchNormalization(name=bn_name_base + '2c')(x)

    x = layers.add([x, input_tensor])
    x = Activation('relu')(x)
    return x


def conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)):
    # Conv Block: the shortcut is a strided 1x1 convolution, so the block can
    # change the spatial size and channel count of the network.
    filters1, filters2, filters3 = filters

    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    x = Conv2D(filters1, (1, 1), strides=strides, name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(name=bn_name_base + '2a')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters2, kernel_size, padding='same', name=conv_name_base + '2b')(x)
    x = BatchNormalization(name=bn_name_base + '2b')(x)
    x = Activation('relu')(x)

    x = Conv2D(filters3, (1, 1), name=conv_name_base + '2c')(x)
    x = BatchNormalization(name=bn_name_base + '2c')(x)

    shortcut = Conv2D(filters3, (1, 1), strides=strides, name=conv_name_base + '1')(input_tensor)
    shortcut = BatchNormalization(name=bn_name_base + '1')(shortcut)

    x = layers.add([x, shortcut])
    x = Activation('relu')(x)
    return x


def ResNet50(input_shape=[224, 224, 3], classes=1000):
    img_input = Input(shape=input_shape)
    x = ZeroPadding2D((3, 3))(img_input)

    # Stem: 7x7/2 conv + 3x3/2 max pool
    x = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(x)
    x = BatchNormalization(name='bn_conv1')(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((3, 3), strides=(2, 2))(x)

    # Stage 2 (strides=(1, 1): only the channel count changes here)
    x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='b')
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='c')

    # Stage 3
    x = conv_block(x, 3, [128, 128, 512], stage=3, block='a')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='b')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='c')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='d')

    # Stage 4
    x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='b')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='c')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='d')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='e')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='f')

    # Stage 5
    x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c')

    # Head: global 7x7 average pool + 1000-way softmax classifier
    x = AveragePooling2D((7, 7), name='avg_pool')(x)
    x = Flatten()(x)
    x = Dense(classes, activation='softmax', name='fc1000')(x)

    model = Model(img_input, x, name='resnet50')
    model.load_weights("resnet50_weights_tf_dim_ordering_tf_kernels.h5")

    return model


if __name__ == '__main__':
    model = ResNet50()
    model.summary()
    img_path = 'elephant.jpg'
    # img_path = 'bike.jpg'
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    print('Input image shape:', x.shape)
    preds = model.predict(x)
    print('Predicted:', decode_predictions(preds))
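
One practical note: model.load_weights above assumes the pretrained ImageNet weights file already sits next to the script. If it does not, a common approach is to fetch it with the get_file helper the code already imports. The release URL below is the one historically published for Keras's ResNet50 and is an assumption on my part; verify it against your Keras version.

# Hypothetical fetch step (URL is an assumption; verify before relying on it).
weights_path = get_file(
    'resnet50_weights_tf_dim_ordering_tf_kernels.h5',
    'https://github.com/fchollet/deep-learning-models/releases/download/v0.2/'
    'resnet50_weights_tf_dim_ordering_tf_kernels.h5',
    cache_subdir='models')
model.load_weights(weights_path)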

Summary

That covers the ResNet50 implementation in Keras: residual shortcuts let each block learn only the difference F(x) = H(x) - x between its input and desired output, and the full 50-layer network is assembled by chaining Conv Blocks (to change dimensions) with Identity Blocks (to add depth).
