
tf dense layer: comparing two creation approaches, with a numpy implementation

Published: 2024/10/8

Contents

    • 1 Dense Layer
    • 2 Comparing the plain add-layer approach with subclassing
      • 2.1 Global config
      • 2.2 Implemented with add
      • 2.3 Implemented by subclassing
    • 3 Comparison with custom weights
      • 3.1 Custom weights via the built-in add_weight method
      • 3.2 Implemented with add
      • 3.3 Implemented by subclassing
    • 4 Using a custom matrix as the weight matrix
      • 4.1 Initializing the weight and bias matrices
      • 4.2 Implemented with add
      • 4.3 Implemented by subclassing
    • 5 numpy implementation
      • 5.1 Initializing the weight and bias matrices
      • 5.2 add implementation, with an activation function
      • 5.3 tf implementation
      • 5.4 numpy implementation

1 Dense Layer

Goal: build a Dense layer and compare the differences between the two implementation approaches.
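Before comparing the two approaches, it helps to recall what a Dense layer actually computes: activation(x @ W + b). A minimal numpy sketch (the names and sizes here are illustrative, matching the rows/columns/units used below):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random((4, 100))       # a batch of 4 samples with 100 features
W = rng.normal(size=(100, 5))  # kernel, shape (input_dim, units)
b = np.zeros(5)                # bias, shape (units,)

y = x @ W + b                  # what Dense(units=5) computes before any activation
assert y.shape == (4, 5)
```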

2 Comparing the plain add-layer approach with subclassing

2.1 Global config

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, LSTM, Dense
import numpy as np

np.random.seed(1)

rows = 10000     # number of samples
columns = 100    # number of features

train_x1 = np.random.random(size=(int(rows/2), columns))
train_y1 = np.random.choice([0], size=(int(rows/2), 1))
train_x2 = np.random.random(size=(int(rows/2), columns)) + 1
train_y2 = np.random.choice([1], size=(int(rows/2), 1))

train_x = np.vstack((train_x1, train_x2))
train_y = np.vstack((train_y1, train_y2))

units = 5  # custom number of units

2.2 Implemented with add

tf.random.set_seed(1)  # fix random values

model1 = keras.Sequential()
model1.add(Input(shape=(columns,)))
model1.add(Dense(units=units))

model1.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
model1.fit(train_x, train_y, epochs=10)
model1.predict(train_x)[-1][-1]

l1 = model1.layers[0]
w1, b1 = l1.get_weights()

The parameters in the Dense API fall into several groups:

  • initializer — initializes the parameters
  • regularizer — selects the regularization mode (L1/L2)
  • constraint — applies constraints such as non-negativity or max-norm
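The regularizer and constraint options can be pictured without Keras: a regularizer adds a penalty term to the training loss, while a constraint projects the weights back into a feasible set after each update. A rough numpy sketch (the l1, l2, and max_norm values are illustrative, not Keras defaults):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(100, 5))  # a kernel like the Dense layer's

# An L1/L2 regularizer contributes a penalty term added to the training loss.
l1, l2 = 0.01, 0.01
penalty = l1 * np.abs(w).sum() + l2 * np.square(w).sum()

# A max-norm constraint rescales weights whose norm exceeds max_norm
# after each gradient update (per column for a Dense kernel).
max_norm = 2.0
norms = np.linalg.norm(w, axis=0, keepdims=True)
w_constrained = w * np.minimum(1.0, max_norm / norms)

assert penalty > 0
assert np.all(np.linalg.norm(w_constrained, axis=0) <= max_norm + 1e-9)
```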
2.3 Implemented by subclassing

tf.random.set_seed(1)  # fix random values

class MyDenseLayer(keras.layers.Layer):
    def __init__(self, num_outputs):
        super(MyDenseLayer, self).__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        self.kernel = self.add_weight(name="kernel", shape=[int(input_shape[-1]), self.num_outputs])
        self.bias = self.add_weight(name="bias", shape=[self.num_outputs, ], initializer=keras.initializers.zeros)
        self.built = True  # note: assigning to self.build would shadow the build method

    def call(self, input):
        return tf.matmul(input, self.kernel) + self.bias

model2 = keras.Sequential()
model2.add(Input(shape=(columns,)))
model2.add(MyDenseLayer(units))
model2.compile(loss="mse", optimizer="adam", metrics=['accuracy'])

model2.fit(train_x, train_y, epochs=10)
model2.predict(train_x)[-1][-1]

l2 = model2.layers[0]
w2, b2 = l2.get_weights()

3 Comparison with custom weights

3.1 Custom weights via the built-in add_weight method

Define the initial weights and bias yourself. Why define these two functions? So that the add_weight method can use them to initialize the weights and bias.

def w_init(shape, dtype=tf.float32):
    return tf.random.normal(shape=shape, dtype=dtype)

def b_init(shape, dtype=tf.float32):
    return tf.zeros(shape=shape, dtype=dtype)

3.2 Implemented with add

tf.random.set_seed(1)  # fix random values

model3 = keras.Sequential()
model3.add(Input(shape=(columns,)))
model3.add(Dense(units=units, kernel_initializer=w_init, bias_initializer=b_init))  # fix the initial weights

model3.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
model3.fit(train_x, train_y, epochs=10)
model3.predict(train_x)[-1][-1]

l3 = model3.layers[0]
w3, b3 = l3.get_weights()

3.3 Implemented by subclassing

tf.random.set_seed(1)  # fix random values

class MyDenseLayer(keras.layers.Layer):
    def __init__(self, num_outputs):
        super(MyDenseLayer, self).__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        self.kernel = self.add_weight(initializer=w_init, shape=(input_shape[-1], self.num_outputs), dtype=tf.float32)  # custom weights
        self.bias = self.add_weight(initializer=b_init, shape=(self.num_outputs,), dtype=tf.float32)  # custom bias

    def call(self, input):
        return tf.matmul(input, self.kernel) + self.bias

model4 = keras.Sequential()
model4.add(Input(shape=(columns,)))
model4.add(MyDenseLayer(units))
model4.compile(loss="mse", optimizer="adam", metrics=['accuracy'])

model4.fit(train_x, train_y, epochs=10)
model4.predict(train_x)[-1][-1]

l4 = model4.layers[0]
w4, b4 = l4.get_weights()

4 Using a custom matrix as the weight matrix

4.1 Initializing the weight and bias matrices

tf.random.set_seed(1)

w = tf.random.normal(shape=(columns, units), dtype=tf.float32)
b = tf.zeros(shape=(units,), dtype=tf.float32)

def w_init(shape, dtype=tf.float32):
    return w

def b_init(shape, dtype=tf.float32):
    return b

4.2 Implemented with add

tf.random.set_seed(1)  # fix random values

model5 = keras.Sequential()
model5.add(Input(shape=(columns,)))
model5.add(Dense(units=units, kernel_initializer=w_init, bias_initializer=b_init))
model5.compile(loss="mse", optimizer="adam", metrics=['accuracy'])

model5.fit(train_x, train_y, epochs=10)
model5.predict(train_x)[-1][-1]

4.3 Implemented by subclassing

Instead of the Layer.add_weight method, build a weight matrix yourself, use it as the initialization, and train.

tf.random.set_seed(1)  # fix random values

class MyDenseLayer(keras.layers.Layer):
    def __init__(self, num_outputs):
        super(MyDenseLayer, self).__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        self.kernel = tf.Variable(w, trainable=True)
        self.bias = tf.Variable(b, trainable=True)

    def call(self, input):
        return tf.matmul(input, self.kernel) + self.bias

model6 = keras.Sequential()
model6.add(Input(shape=(columns,)))
model6.add(MyDenseLayer(units))
model6.compile(loss="mse", optimizer="adam", metrics=['accuracy'])

model6.fit(train_x, train_y, epochs=10)
model6.predict(train_x)[-1][-1]

5 numpy implementation

5.1 Initializing the weight and bias matrices

tf.random.set_seed(1)

learning_rate = 0.01  # learning rate for the SGD steps below (assumed value)

train_x = np.ones(shape=(rows, columns), dtype="float32")  # dtypes must match here, or the numpy and keras results will differ; float32 is used throughout
train_y = np.vstack([np.ones(shape=(int(rows/2), 1), dtype="float32"),
                     np.zeros(shape=(int(rows/2), 1), dtype="float32")])

w = tf.random.normal(shape=(columns, 1), dtype=tf.float32)
b = tf.zeros(shape=(1,), dtype=tf.float32)

def w_init(shape, dtype=tf.float32):
    return tf.convert_to_tensor(w, dtype=tf.float32)

def b_init(shape, dtype=tf.float32):
    return tf.convert_to_tensor(b, dtype=tf.float32)

5.2 add implementation, with an activation function

tf.random.set_seed(1)  # fix random values

model7 = keras.Sequential()
model7.add(Input(shape=(columns,)))
model7.add(Dense(units=1, kernel_initializer=w_init, bias_initializer=b_init, activation="sigmoid"))

h1 = model7.predict(train_x)

model7.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate), metrics=['accuracy'])
model7.fit(train_x, train_y, epochs=1, batch_size=rows)  # note: batch_size must cover all rows (batch GD), because the numpy version below updates on the full data set, not mini-batches

w1, b1 = model7.layers[0].weights

5.3 tf implementation

tf.random.set_seed(1)  # fix random values

x = tf.Variable(train_x, dtype=tf.float32)
w2 = w
b2 = b

with tf.GradientTape(persistent=True) as tape:
    tape.watch([w2, b2])
    y_pred = 1 / (1 + tf.math.exp(-(tf.matmul(x, w2) + b2)))  # sigmoid(x @ w2 + b2); the bias must sit inside the negated sum
    loss = tf.math.reduce_mean(tf.math.square(tf.subtract(y_pred, train_y)))

dw2 = tape.gradient(target=loss, sources=w2)
db2 = tape.gradient(target=loss, sources=b2)

w2 = w2 - dw2 * learning_rate
b2 = b2 - db2 * learning_rate

5.4 numpy implementation

import numpy as np

class MyModel:
    def __init__(self, w, b, learning_rate):
        self.w = w
        self.b = b
        self.learning_rate = learning_rate

    def fit(self, train_x, train_y, epochs, batch_size):
        self.x = train_x
        self.y = train_y
        for epoch in range(epochs):
            print(f"epoch {epoch}")
            self.forward()   # forward pass
            self.get_loss()
            self.backward()  # backward pass

    def forward(self):
        self.h3 = self.sigmoid(np.dot(self.x, self.w) + self.b)

    def backward(self):
        n = self.x.shape[0]
        dw3 = np.dot(self.x.T, 2 * (self.h3 - self.y) * self.h3 * (1 - self.h3) / n)  # d(loss)/d(w)
        db3 = np.dot(np.ones(shape=(1, n)), 2 * (self.h3 - self.y) * self.h3 * (1 - self.h3) / n)  # d(loss)/d(b)
        self.w -= dw3 * self.learning_rate  # use the configured rate, not a hardcoded one
        self.b -= db3 * self.learning_rate

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def get_loss(self):
        loss = np.sum(np.square(self.h3 - self.y), axis=0) / self.x.shape[0]
        print(f"loss {loss}")

    def predict(self):
        pass

model8 = MyModel(w, b, learning_rate)
model8.fit(train_x, train_y, epochs=1, batch_size=rows)

w3 = model8.w
b3 = model8.b
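The hand-derived gradients in backward() can be sanity-checked against a finite-difference estimate on a tiny standalone example (all names below are local to this check):

```python
import numpy as np

# Tiny standalone check: the analytic gradient of mean((sigmoid(x @ w + b) - y)^2)
# with respect to w, versus a finite-difference estimate of the same derivative.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
y = rng.integers(0, 2, size=(8, 1)).astype(float)
w = rng.normal(size=(3, 1))
b = np.zeros((1,))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def mse_loss(w, b):
    h = sigmoid(x @ w + b)
    return np.mean((h - y) ** 2)

# Analytic gradient: the same formula MyModel.backward uses
h = sigmoid(x @ w + b)
dw = x.T @ (2 * (h - y) * h * (1 - h)) / x.shape[0]

# Finite-difference estimate for a single weight entry
eps = 1e-6
w_pert = w.copy()
w_pert[0, 0] += eps
dw_num = (mse_loss(w_pert, b) - mse_loss(w, b)) / eps

assert abs(dw_num - dw[0, 0]) < 1e-4
```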
