
Building Multi-Input, Multi-Output Models in Keras

  • 1. Building a multi-output model
    • Building the model
    • Custom loss functions
    • Batch training
    • Debugging
  • 2. Multi-input, multi-output model (part 1)
    • The model
    • (Key) Defining the model with two inputs and two outputs
    • Assigning loss weights at compile time
    • Training the model
    • An alternative way to compile and train (using dictionaries)
  • Multi-input, multi-output model (part 2): GoogLeNet
    • Model structure
    • Inception module
    • Auxiliary classifier branch
    • The front of the network
    • Building the model
  • Multi-input, multi-output model, example 3

1. Building a multi-output model

Reference: https://www.jianshu.com/p/6bcd63a0165c

Building the model

Build the network with the Keras functional API:

import tensorflow as tf

CLASS_NUM = 10  # example value; set this to your number of classes

# Input layer
inputs = tf.keras.layers.Input(shape=(64, 64, 3))

# Convolutional and fully connected layers would normally go here;
# a Flatten is added so this minimal snippet runs end-to-end.
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(256, activation=tf.nn.relu)(x)

# Two outputs, each identified by a name
fc_a = tf.keras.layers.Dense(name='fc_a', units=CLASS_NUM, activation=tf.nn.softmax)(x)
fc_b = tf.keras.layers.Dense(name='fc_b', units=CLASS_NUM, activation=tf.nn.softmax)(x)

# Single input, multiple outputs
model = tf.keras.Model(inputs=inputs, outputs=[fc_a, fc_b])

# Loss definitions, keyed by the output layer names
losses = {'fc_a': 'categorical_crossentropy',
          'fc_b': 'categorical_crossentropy'}

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss=losses,
              metrics=['accuracy'])
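With named outputs, the targets for fit can be given either as a list in output order or as a dict keyed by the output names. A minimal smoke test of the model above (the random batch below is purely illustrative):

import numpy as np

# Hypothetical batch: 8 random images with one-hot labels for each output
batch_x = np.random.rand(8, 64, 64, 3).astype('float32')
batch_a = np.eye(CLASS_NUM)[np.random.randint(0, CLASS_NUM, 8)]
batch_b = np.eye(CLASS_NUM)[np.random.randint(0, CLASS_NUM, 8)]

model.fit(batch_x, {'fc_a': batch_a, 'fc_b': batch_b}, epochs=1)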

Custom loss functions

def loss_a(y_true, y_pred):
    return tf.keras.losses.categorical_crossentropy(y_true, y_pred)

def loss_b(y_true, y_pred):
    return tf.keras.losses.mean_squared_error(y_true, y_pred)

losses = {'fc_a': loss_a,
          'fc_b': loss_b}

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss=losses,
              metrics=['accuracy'])
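Metrics can be assigned per output in the same way, with a dict keyed by output name; a small sketch (the metric choices here are illustrative):

# Per-output metrics, keyed by output name, mirroring the loss dict
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss=losses,
              metrics={'fc_a': 'accuracy', 'fc_b': 'mse'})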

Batch training

# The labels yielded by data_generator must be a list matching the number of outputs
def data_generator(sample_num, batch_size):
    while True:
        max_num = sample_num - (sample_num % batch_size)
        for i in range(0, max_num, batch_size):
            # ... assemble batch_x, batch_a, batch_b here ...
            yield (batch_x, [batch_a, batch_b])

model.fit_generator(generator=data_generator(sample_num, batch_size),
                    steps_per_epoch=sample_num // batch_size,
                    epochs=EPOCHS,
                    verbose=1)

Debugging

Inside a custom loss function the arguments are symbolic tensors, so if you want to inspect intermediate values of the loss computation, a plain print will not show them: every TensorFlow op has to be executed through a session. To debug, enable eager execution:

import numpy as np
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()
np.set_printoptions(threshold=np.nan)  # print all elements
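With eager execution enabled, a loss function can be called directly on constant tensors and its result printed; a minimal sketch with made-up values (works with loss_a above or any tf.keras loss):

import tensorflow as tf

# Toy values, purely illustrative
y_true = tf.constant([[0., 1.], [1., 0.]])
y_pred = tf.constant([[0.1, 0.9], [0.8, 0.2]])

# In eager mode this evaluates immediately, so print shows real numbers
print(tf.keras.losses.categorical_crossentropy(y_true, y_pred))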

2. Multi-input, multi-output model (part 1)

Reference: https://blog.csdn.net/weixin_40920290/article/details/80917353

The model

Main input (main_input): the news headline itself, i.e. a sequence of words.

Auxiliary input (aux_input): receives additional data, such as the publication time of the headline.

The model is supervised through two loss functions. Using the main loss function earlier in a model is a good regularization technique for deep learning models.

Below we implement it with the functional API.

Main input: receives the headline itself as a sequence of integers (each integer encodes a word). The integers lie between 1 and 10,000 (a 10,000-word vocabulary), and each sequence is 100 words long:

import keras
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model

# Headline input: a sequence of 100 integers, each between 1 and 10000.
# Note that we can name any layer by passing it a `name` argument.
main_input = Input(shape=(100,), dtype='int32', name='main_input')

# The Embedding layer encodes the input sequence into a sequence of dense 512-dimensional vectors.
x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input)

# The LSTM layer turns the vector sequence into a single vector that carries context for the whole sequence.
lstm_out = LSTM(32)(x)

Here we add the auxiliary loss, so that the LSTM and Embedding layers can be trained smoothly even when the main loss of the model is high:

auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)

Next we concatenate the auxiliary input with the output of the LSTM layer and feed the result into the rest of the model:

auxiliary_input = Input(shape=(5,), name='aux_input')
x = keras.layers.concatenate([lstm_out, auxiliary_input])

Then add the remaining layers:

# Stack several fully connected layers
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)

# Finally, the main logistic-regression output layer
main_output = Dense(1, activation='sigmoid', name='main_output')(x)

(Key) Defining the model with two inputs and two outputs:

model = Model(inputs=[main_input, auxiliary_input],
              outputs=[main_output, auxiliary_output])

Assigning loss weights at compile time

When compiling the model, give the auxiliary loss a weight of 0.2.

To assign different loss_weights or loss functions to different outputs, pass a list or a dictionary. Here we pass a single loss function to the loss argument, and it is used for all outputs:

model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              loss_weights=[1., 0.2])

Training the model

We can train the model by passing lists of input arrays and target arrays:

model.fit([headline_data, additional_data], [labels, labels],
          epochs=50, batch_size=32)

An alternative way to compile and train (using dictionaries)

Since the inputs and outputs are all named (we passed a name argument when defining them), the model can also be compiled like this:

model.compile(optimizer='rmsprop',
              loss={'main_output': 'binary_crossentropy', 'aux_output': 'binary_crossentropy'},
              loss_weights={'main_output': 1., 'aux_output': 0.2})

# And then trained like this:
model.fit({'main_input': headline_data, 'aux_input': additional_data},
          {'main_output': labels, 'aux_output': labels},
          epochs=50, batch_size=32)

Multi-input, multi-output model (part 2): GoogLeNet

Reference: https://blog.csdn.net/weixin_40920290/article/details/80925228

Model structure

Note:

  • LocalRespNorm is rarely used any more; BatchNormalization is used in its place
  • Because the image data is different, some of the parameters change as well
  • We implement only the GoogLeNet (v1) version

Inception module

def inception(x, filter_size, layer_number):
    """Inception module built from 1x1, 3x3 and 5x5 convolutions plus max pooling.

    Args:
        x (Tensor): input tensor
        filter_size: list of six filter counts
        layer_number: index of this Inception module

    Returns:
        The tensor produced by the Inception module.
    """
    layer_number = str(layer_number)
    with K.name_scope('Inception_' + layer_number):
        # 1x1 convolution
        with K.name_scope('conv_1x1'):
            conv_1x1 = Conv2D(filters=filter_size[0], kernel_size=(1, 1),
                              strides=1, padding='same', activation='relu',
                              kernel_regularizer=l2(L2_RATE),
                              name='conv_1x1' + layer_number)(x)
        # 3x3 bottleneck layer followed by the 3x3 convolution
        with K.name_scope('conv_3x3'):
            conv_3x3 = Conv2D(filters=filter_size[1], kernel_size=(1, 1),
                              strides=1, padding='same', activation='relu',
                              kernel_regularizer=l2(L2_RATE),
                              name='conv_3x3_bottleneck' + layer_number)(x)
            conv_3x3 = Conv2D(filters=filter_size[2], kernel_size=(3, 3),
                              strides=1, padding='same', activation='relu',
                              kernel_regularizer=l2(L2_RATE),
                              name='conv_3x3' + layer_number)(conv_3x3)
        # 5x5 bottleneck layer followed by the 5x5 convolution
        with K.name_scope('conv_5x5'):
            conv_5x5 = Conv2D(filters=filter_size[3], kernel_size=(1, 1),
                              strides=1, padding='same', activation='relu',
                              kernel_regularizer=l2(L2_RATE),
                              name='conv_5x5_bottleneck' + layer_number)(x)
            conv_5x5 = Conv2D(filters=filter_size[4], kernel_size=(5, 5),
                              strides=1, padding='same', activation='relu',
                              kernel_regularizer=l2(L2_RATE),
                              name='conv_5x5' + layer_number)(conv_5x5)
        # Max pooling followed by a 1x1 bottleneck layer
        with K.name_scope('Max_Conv'):
            max_pool = MaxPooling2D(pool_size=3, strides=1, padding='same',
                                    name='maxpool' + layer_number)(x)
            max_pool = Conv2D(filters=filter_size[5], kernel_size=(1, 1),
                              strides=1, padding='same', activation='relu',
                              kernel_regularizer=l2(L2_RATE),
                              name='maxpool_conv1x1' + layer_number)(max_pool)
        # Concatenate all branches: height/width match, channels are stacked
        with K.name_scope('concatenate'):
            x = concatenate([conv_1x1, conv_3x3, conv_5x5, max_pool], axis=-1)
    return x

Auxiliary classifier branch

def aux_classifier(x, filter_size, layer_number):
    """Auxiliary branch that outputs a softmax classification.

    Args:
        x: input tensor
        filter_size: list of filter counts, length 2
        layer_number: index of this branch

    Returns:
        The softmax output of the auxiliary branch.
    """
    layer_number = str(layer_number)
    with K.name_scope('aux_classifier' + layer_number):
        # Average pooling
        x = AveragePooling2D(pool_size=3, strides=2, padding='same',
                             name='AveragePooling2D' + layer_number)(x)
        # (0) 1x1 convolution
        x = Conv2D(filters=filter_size[0], kernel_size=1, strides=1,
                   padding='valid', activation='relu',
                   kernel_regularizer=l2(L2_RATE),
                   name='aux_conv' + layer_number)(x)
        # Flatten
        x = Flatten()(x)
        # (1) Fully connected layer
        x = Dense(units=filter_size[1], activation='relu',
                  kernel_regularizer=l2(L2_RATE),
                  name='aux_dense1_' + layer_number)(x)
        x = Dropout(0.7)(x)
        # (2) Softmax output layer
        x = Dense(units=NUM_CLASS, activation='softmax',
                  kernel_regularizer=l2(L2_RATE),
                  name='aux_output' + layer_number)(x)
    return x

The front of the network

def front(x, filter_size):
    # (0) conv2d
    x = Conv2D(filters=filter_size[0], kernel_size=5, strides=1,
               padding='same', activation='relu',
               kernel_regularizer=l2(L2_RATE))(x)
    x = MaxPooling2D(pool_size=3, strides=2, padding='same')(x)
    x = BatchNormalization(axis=-1)(x)
    # (1) conv2d
    x = Conv2D(filters=filter_size[1], kernel_size=1, strides=1,
               padding='same', activation='relu',
               kernel_regularizer=l2(L2_RATE))(x)
    # (2) conv2d
    x = Conv2D(filters=filter_size[2], kernel_size=3, strides=1,
               padding='same', activation='relu',
               kernel_regularizer=l2(L2_RATE))(x)
    x = BatchNormalization(axis=-1)(x)
    x = MaxPooling2D(pool_size=3, strides=1, padding='same')(x)
    return x

Building the model

Key points

  • Multi-input, multi-output models
  • GoogLeNet (v1)
  • K.name_scope()
  • Notes

    • The parameters are not exactly the same as the original, because the input is the CIFAR-10 dataset
    • Training uses neither a generator nor image augmentation, since fit_generator takes a little extra care with multi-output models; a sketch of such a generator follows the full script below
    • TensorBoard is used to inspect the computation graph and the training progress
    • Saving the trained model and similar housekeeping is omitted
    • K.name_scope works fine here; use it to organize the network structure, then simply pass a TensorBoard callback when training
import keras.backend as K
from keras.datasets import cifar10
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, AveragePooling2D
from keras.layers import concatenate, BatchNormalization, Flatten, Dropout
from keras.regularizers import l2
from keras.utils import to_categorical
from keras.models import Model
from keras.callbacks import TensorBoard
from keras.optimizers import Adam

L2_RATE = 0.002
NUM_CLASS = 10
BATCH_SIZE = 128
EPOCH = 10

# inception(), aux_classifier() and front() are exactly the functions
# defined in the three sections above and are omitted here.

def load():
    (x_train, y_train), (x_test, y_test) = cifar10.load_data()
    y_train = to_categorical(y_train, NUM_CLASS)
    y_test = to_categorical(y_test, NUM_CLASS)
    x_train = x_train.astype('float32') / 255
    x_test = x_test.astype('float32') / 255
    return x_train, y_train, x_test, y_test

def googlenet_model(input_shape):
    # Assemble the network
    X_Input = Input(shape=input_shape, name='Input')
    X = front(X_Input, [64, 64, 192])
    # Inception_0
    X = inception(X, filter_size=[64, 96, 128, 16, 32, 32], layer_number=0)
    # Inception_1
    X = inception(X, [128, 128, 192, 32, 96, 64], layer_number=1)
    X = MaxPooling2D(pool_size=3, strides=2, padding='same')(X)
    # Inception_2
    X = inception(X, [192, 96, 208, 16, 48, 64], layer_number=2)
    # First auxiliary classifier
    aux_output_1 = aux_classifier(X, [128, 1024], layer_number=1)
    # Inception_3
    X = inception(X, [160, 112, 225, 24, 64, 64], layer_number=3)
    # Inception_4
    X = inception(X, [128, 128, 256, 24, 64, 64], layer_number=4)
    # Inception_5
    X = inception(X, [112, 144, 288, 32, 64, 64], layer_number=5)
    # Second auxiliary classifier
    aux_output_2 = aux_classifier(X, [128, 1024], layer_number=2)
    # Inception_6
    X = inception(X, [256, 160, 320, 32, 128, 128], layer_number=6)
    X = MaxPooling2D(pool_size=3, strides=2, padding='same')(X)
    # Inception_7
    X = inception(X, [256, 160, 320, 32, 128, 128], layer_number=7)
    # Inception_8
    X = inception(X, [386, 192, 384, 48, 128, 128], layer_number=8)
    # Main output head
    X = AveragePooling2D(pool_size=4, strides=1, padding='valid')(X)
    X = Flatten()(X)
    X = Dropout(0.4)(X)
    main_output = Dense(NUM_CLASS, activation='softmax',
                        kernel_regularizer=l2(L2_RATE))(X)
    # One input, three outputs: the main head plus the two auxiliary classifiers
    model = Model(inputs=X_Input, outputs=[main_output, aux_output_1, aux_output_2])
    return model

if __name__ == '__main__':
    x_train, y_train, x_test, y_test = load()
    input_shape = x_train.shape[1:]
    # Build and compile the model
    GoogleNet = googlenet_model(input_shape)
    optimizer = Adam(epsilon=1e-08)
    GoogleNet.compile(optimizer=optimizer, loss='categorical_crossentropy',
                      metrics=['accuracy'], loss_weights=[1, 0.3, 0.3])
    GoogleNet.summary()
    tfck = TensorBoard(log_dir='logs/GoogleNet')
    GoogleNet.fit(x=x_train, y=[y_train, y_train, y_train],
                  validation_data=(x_test, [y_test, y_test, y_test]),
                  epochs=EPOCH, callbacks=[tfck], batch_size=BATCH_SIZE)
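As noted in the key points, fit_generator also works with multiple outputs, as long as every yielded batch pairs the inputs with a list (or name-keyed dict) of targets, one per output. A minimal sketch against the script above; the helper name is ours:

def triple_target_generator(x, y, batch_size):
    # Yield (inputs, [y, y, y]) batches forever: one target per model output
    while True:
        for i in range(0, len(x) - batch_size + 1, batch_size):
            yield x[i:i + batch_size], [y[i:i + batch_size]] * 3

# Drop-in replacement for the fit() call in the script above
GoogleNet.fit_generator(triple_target_generator(x_train, y_train, BATCH_SIZE),
                        steps_per_epoch=len(x_train) // BATCH_SIZE,
                        epochs=EPOCH, callbacks=[tfck])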

Multi-input, multi-output model, example 3

Reference: https://blog.csdn.net/zzc15806/article/details/84067017

Suppose we need to build the following model, whose inputs are a 100-dimensional vector and a 50-dimensional vector and whose output is 0 or 1:

import numpy as np
from keras.layers import Conv1D, Dense, MaxPool1D, concatenate, Flatten
from keras import Input, Model
from keras.utils import plot_model

def multi_input_model():
    """Build the multi-input model."""
    input1_ = Input(shape=(100, 1), name='input1')
    input2_ = Input(shape=(50, 1), name='input2')

    x1 = Conv1D(16, kernel_size=3, strides=1, activation='relu', padding='same')(input1_)
    x1 = MaxPool1D(pool_size=10, strides=10)(x1)

    x2 = Conv1D(16, kernel_size=3, strides=1, activation='relu', padding='same')(input2_)
    x2 = MaxPool1D(pool_size=5, strides=5)(x2)

    x = concatenate([x1, x2])
    x = Flatten()(x)
    x = Dense(10, activation='relu')(x)
    output_ = Dense(1, activation='sigmoid', name='output')(x)

    model = Model(inputs=[input1_, input2_], outputs=[output_])
    model.summary()
    return model

if __name__ == '__main__':
    # Generate training data
    x1 = np.random.rand(100, 100, 1)
    x2 = np.random.rand(100, 50, 1)
    # Generate labels
    y = np.random.randint(0, 2, (100,))

    model = multi_input_model()

    # Save a diagram of the model
    plot_model(model, 'Multi_input_model.png')

    model.compile(optimizer='adam', loss='binary_crossentropy')
    model.fit([x1, x2], y, epochs=10, batch_size=10)
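Inference follows the same convention, one array per input in definition order; a small sketch with hypothetical new samples:

# Hypothetical new samples, shaped like the two inputs above
new_x1 = np.random.rand(5, 100, 1)
new_x2 = np.random.rand(5, 50, 1)

# predict takes the inputs in the same order as the model definition
probs = model.predict([new_x1, new_x2])  # shape (5, 1): sigmoid probabilities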
