Building an MNIST Handwritten Digit Recognition Network with tf.keras

Published: 2024/4/15 · 豆豆

Building an MNIST Handwritten Digit Recognition Network with tf.keras

Contents

Building an MNIST handwritten digit recognition network with tf.keras

1. Building a sequential model with tf.keras.Sequential

1.1 The tf.keras.Sequential model

1.2 Building the mnist handwritten digit recognition network (sequential model)

1.3 Complete training code

2. Building advanced models

2.1 The functional API

2.2 Building the mnist handwritten digit recognition network (functional API)

2.3 Complete training code

3. Advanced tf.keras: callbacks with tf.keras.callbacks.Callback

4. Advanced tf.keras: custom layers with tf.keras.layers.Layer


1. Building a sequential model with tf.keras.Sequential

1.1 The tf.keras.Sequential model

In Keras, you assemble layers to build models. A model is (usually) a graph of layers. The most common type of model is a stack of layers: the tf.keras.Sequential model. To build a simple fully connected network (i.e., a multi-layer perceptron), run the following code:

```python
model = keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(keras.layers.Dense(64, activation='relu'))
# Add another:
model.add(keras.layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(keras.layers.Dense(10, activation='softmax'))
```
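What those three Dense layers compute can be written out directly. The following NumPy sketch mimics the forward pass of the stack above; the weights are random stand-ins chosen purely for illustration, not anything Keras produces:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0)

def softmax(z):
    # subtract the row max for numerical stability
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# random stand-ins for the three Dense layers (64, 64, 10 units)
w1, b1 = rng.normal(size=(32, 64)), np.zeros(64)
w2, b2 = rng.normal(size=(64, 64)), np.zeros(64)
w3, b3 = rng.normal(size=(64, 10)), np.zeros(10)

x = rng.normal(size=(1, 32))            # one sample with 32 features
h = relu(relu(x @ w1 + b1) @ w2 + b2)   # the two hidden layers
probs = softmax(h @ w3 + b3)            # shape (1, 10); each row sums to 1
```

Each `model.add` call above appends one such `activation(x @ w + b)` step to the stack.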


1.2 Building the mnist handwritten digit recognition network (sequential model)

```python
def mnist_cnn(input_shape):
    '''Build a CNN model.
    :param input_shape: the input shape
    :return: the model
    '''
    model = keras.Sequential()
    model.add(keras.layers.Conv2D(filters=32, kernel_size=5, strides=(1, 1),
                                  padding='same', activation=tf.nn.relu,
                                  input_shape=input_shape))
    model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
    model.add(keras.layers.Conv2D(filters=64, kernel_size=3, strides=(1, 1),
                                  padding='same', activation=tf.nn.relu))
    model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
    model.add(keras.layers.Dropout(0.25))
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(units=128, activation=tf.nn.relu))
    model.add(keras.layers.Dropout(0.5))
    model.add(keras.layers.Dense(units=10, activation=tf.nn.softmax))
    return model
```
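As a quick sanity check on the layer shapes: both convolutions use padding='same', so they preserve the spatial size, and each 2×2, stride-2 'valid' pooling halves it. A few lines of plain arithmetic confirm the 28 → 14 → 7 progression and the size of the vector entering the first Dense layer:

```python
def pool_out(size, pool=2, stride=2):
    # output spatial size of a 'valid' pooling layer
    return (size - pool) // stride + 1

size = 28               # after conv1 (padding='same' keeps 28x28)
size = pool_out(size)   # after pool1 -> 14
size = pool_out(size)   # after pool2 -> 7 (conv2 is 'same', no change)
flat = size * size * 64 # units entering the Dense(128) layer
```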

1.3 Complete training code

```python
# -*- coding: utf-8 -*-
"""
@Project: tensorflow-yolov3
@File   : keras_mnist.py
@Author : panjq
@E-mail : pan_jinquan@163.com
@Date   : 2019-01-31 09:30:12
"""
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import numpy as np

mnist = keras.datasets.mnist


def get_train_val(mnist_path):
    # mnist download: https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
    (train_images, train_labels), (test_images, test_labels) = mnist.load_data(mnist_path)
    print("train_images nums:{}".format(len(train_images)))
    print("test_images nums:{}".format(len(test_images)))
    return train_images, train_labels, test_images, test_labels


def show_mnist(images, labels):
    for i in range(25):
        plt.subplot(5, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)
        plt.imshow(images[i], cmap=plt.cm.gray)
        plt.xlabel(str(labels[i]))
    plt.show()


def one_hot(labels):
    onehot_labels = np.zeros(shape=[len(labels), 10])
    for i in range(len(labels)):
        index = labels[i]
        onehot_labels[i][index] = 1
    return onehot_labels


def mnist_net(input_shape):
    '''Build a simple fully connected model:
    input layer: 28x28=784 input nodes
    hidden layer: 120 nodes
    output layer: 10 nodes
    :param input_shape: the input shape
    :return: the model
    '''
    model = keras.Sequential()
    model.add(keras.layers.Flatten(input_shape=input_shape))          # input layer
    model.add(keras.layers.Dense(units=120, activation=tf.nn.relu))   # hidden layer
    model.add(keras.layers.Dense(units=10, activation=tf.nn.softmax)) # output layer
    return model


def mnist_cnn(input_shape):
    '''Build a CNN model.
    :param input_shape: the input shape
    :return: the model
    '''
    model = keras.Sequential()
    model.add(keras.layers.Conv2D(filters=32, kernel_size=5, strides=(1, 1),
                                  padding='same', activation=tf.nn.relu,
                                  input_shape=input_shape))
    model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
    model.add(keras.layers.Conv2D(filters=64, kernel_size=3, strides=(1, 1),
                                  padding='same', activation=tf.nn.relu))
    model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
    model.add(keras.layers.Dropout(0.25))
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(units=128, activation=tf.nn.relu))
    model.add(keras.layers.Dropout(0.5))
    model.add(keras.layers.Dense(units=10, activation=tf.nn.softmax))
    return model


def train_model(train_images, train_labels, test_images, test_labels):
    # re-scale to the 0~1.0 range
    train_images = train_images / 255.0
    test_images = test_images / 255.0
    # reshape the mnist data to 4-D
    train_images = np.expand_dims(train_images, axis=3)
    test_images = np.expand_dims(test_images, axis=3)
    print("train_images :{}".format(train_images.shape))
    print("test_images :{}".format(test_images.shape))
    train_labels = one_hot(train_labels)
    test_labels = one_hot(test_labels)
    # build the model
    # model = mnist_net(input_shape=(28, 28))
    model = mnist_cnn(input_shape=(28, 28, 1))
    model.compile(optimizer=tf.train.AdamOptimizer(),
                  loss="categorical_crossentropy",
                  metrics=['accuracy'])
    model.fit(x=train_images, y=train_labels, epochs=5)
    test_loss, test_acc = model.evaluate(x=test_images, y=test_labels)
    print("Test Accuracy %.2f" % test_acc)
    # run predictions
    cnt = 0
    predictions = model.predict(test_images)
    for i in range(len(test_images)):
        target = np.argmax(predictions[i])
        label = np.argmax(test_labels[i])
        if target == label:
            cnt += 1
    print("correct prediction of total : %.2f" % (cnt / len(test_images)))
    model.save('mnist-model.h5')


if __name__ == "__main__":
    mnist_path = 'D:/MyGit/tensorflow-yolov3/data/mnist.npz'
    train_images, train_labels, test_images, test_labels = get_train_val(mnist_path)
    # show_mnist(train_images, train_labels)
    train_model(train_images, train_labels, test_images, test_labels)
```
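The hand-written one_hot helper above can be replaced by a one-line NumPy idiom that indexes an identity matrix with the label array; a minimal equivalent sketch:

```python
import numpy as np

def one_hot(labels, num_classes=10):
    # row i of eye(num_classes) is the one-hot vector for class i,
    # so indexing by the label array encodes the whole batch at once
    return np.eye(num_classes)[labels]

labels = np.array([3, 0, 9])
onehot = one_hot(labels)  # shape (3, 10), one 1 per row
```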

2. Building advanced models

2.1 The functional API

A tf.keras.Sequential model is a simple stack of layers and cannot represent arbitrary models. Use the Keras functional API to build complex model topologies, such as:

multi-input models,
multi-output models,
models with shared layers (the same layer called multiple times),
models with non-sequential data flows (e.g., residual connections).

A model built with the functional API has the following characteristics:

A layer instance is callable and returns a tensor.
Input tensors and output tensors are used to define a tf.keras.Model instance.
The model is trained the same way as a Sequential model.

The following example uses the functional API to build a simple fully connected network:

```python
inputs = keras.Input(shape=(32,))  # Returns a placeholder tensor

# A layer instance is callable on a tensor, and returns a tensor.
x = keras.layers.Dense(64, activation='relu')(inputs)
x = keras.layers.Dense(64, activation='relu')(x)
predictions = keras.layers.Dense(10, activation='softmax')(x)

# Instantiate the model given inputs and outputs.
model = keras.Model(inputs=inputs, outputs=predictions)

# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
```

2.2 Building the mnist handwritten digit recognition network (functional API)

```python
def mnist_cnn():
    """Define the mnist model with keras."""
    # define a truncated_normal initializer
    tn_init = keras.initializers.truncated_normal(0, 0.1, SEED, dtype=tf.float32)
    # define a constant initializer
    const_init = keras.initializers.constant(0.1, tf.float32)
    # define a L2 regularizer
    l2_reg = keras.regularizers.l2(5e-4)

    # Input placeholder. The shape argument of Input should be the shape of a
    # single image, e.g. (28, 28, 1), not the shape of a batch of images,
    # e.g. (16, 28, 28, 1).
    # inputs: shape(None, 28, 28, 1)
    inputs = layers.Input(shape=(IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS), dtype=tf.float32)

    # Convolution; output shape is (None, 28, 28, 32). The first argument of
    # Conv2D is the number of filters, the second is the kernel size. Unlike
    # low-level TensorFlow, where the full kernel shape is given, Keras only
    # needs the window size, e.g. (5, 5); the input-channel dimension is
    # inferred from the previous layer's shape (here 1, since the input is
    # (None, 28, 28, 1)). The third argument, strides, works the same way.
    # See the Keras documentation for the remaining arguments.
    # conv1: shape(None, 28, 28, 32)
    conv1 = layers.Conv2D(32, (5, 5), strides=(1, 1), padding='same',
                          activation='relu', use_bias=True,
                          kernel_initializer=tn_init, name='conv1')(inputs)
    # pool1: shape(None, 14, 14, 32)
    pool1 = layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='same', name='pool1')(conv1)
    # conv2: shape(None, 14, 14, 64)
    conv2 = layers.Conv2D(64, (5, 5), strides=(1, 1), padding='same',
                          activation='relu', use_bias=True,
                          kernel_initializer=tn_init,
                          bias_initializer=const_init, name='conv2')(pool1)
    # pool2: shape(None, 7, 7, 64)
    pool2 = layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='same', name='pool2')(conv2)
    # flatten: shape(None, 3136)
    flatten = layers.Flatten(name='flatten')(pool2)
    # fc1: shape(None, 512)
    fc1 = layers.Dense(512, 'relu', True, kernel_initializer=tn_init,
                       bias_initializer=const_init, kernel_regularizer=l2_reg,
                       bias_regularizer=l2_reg, name='fc1')(flatten)
    # dropout
    dropout1 = layers.Dropout(0.5, seed=SEED)(fc1)
    # fc2: shape(None, 10)
    fc2 = layers.Dense(NUM_LABELS, activation=None, use_bias=True,
                       kernel_initializer=tn_init, bias_initializer=const_init, name='fc2',
                       kernel_regularizer=l2_reg, bias_regularizer=l2_reg)(dropout1)
    # softmax: shape(None, 10)
    softmax = layers.Softmax(name='softmax')(fc2)
    # make the model
    model = keras.Model(inputs=inputs, outputs=softmax, name='mnist')
    return model
```

2.3 Complete training code

```python
# -*- coding: utf-8 -*-
"""
@Project: tensorflow-yolov3
@File   : mnist_cnn2.py
@Author : panjq
@E-mail : pan_jinquan@163.com
@Date   : 2019-01-31 10:59:33
"""
# import keras and keras.layers from tensorflow
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

mnist = keras.datasets.mnist

# image size
IMAGE_SIZE = 28
# number of channels; 1 since the images are grayscale
NUM_CHANNELS = 1
# range of the pixel values
PIXEL_DEPTH = 255
# number of classes: digits 0~9, so 10
NUM_LABELS = 10
# size of the validation set
VALIDATION_SIZE = 5000
# random seed; set to None for a random seed
SEED = 66478
# batch size
BATCH_SIZE = 64
# number of training epochs
NUM_EPOCHS = 10
EVAL_BATCH_SIZE = 64


def mnist_cnn():
    """Define the mnist model with keras."""
    # define a truncated_normal initializer
    tn_init = keras.initializers.truncated_normal(0, 0.1, SEED, dtype=tf.float32)
    # define a constant initializer
    const_init = keras.initializers.constant(0.1, tf.float32)
    # define a L2 regularizer
    l2_reg = keras.regularizers.l2(5e-4)

    # Input placeholder. The shape argument of Input should be the shape of a
    # single image, e.g. (28, 28, 1), not the shape of a batch of images,
    # e.g. (16, 28, 28, 1).
    # inputs: shape(None, 28, 28, 1)
    inputs = layers.Input(shape=(IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS), dtype=tf.float32)

    # conv1: shape(None, 28, 28, 32)
    conv1 = layers.Conv2D(32, (5, 5), strides=(1, 1), padding='same',
                          activation='relu', use_bias=True,
                          kernel_initializer=tn_init, name='conv1')(inputs)
    # pool1: shape(None, 14, 14, 32)
    pool1 = layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='same', name='pool1')(conv1)
    # conv2: shape(None, 14, 14, 64)
    conv2 = layers.Conv2D(64, (5, 5), strides=(1, 1), padding='same',
                          activation='relu', use_bias=True,
                          kernel_initializer=tn_init,
                          bias_initializer=const_init, name='conv2')(pool1)
    # pool2: shape(None, 7, 7, 64)
    pool2 = layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='same', name='pool2')(conv2)
    # flatten: shape(None, 3136)
    flatten = layers.Flatten(name='flatten')(pool2)
    # fc1: shape(None, 512)
    fc1 = layers.Dense(512, 'relu', True, kernel_initializer=tn_init,
                       bias_initializer=const_init, kernel_regularizer=l2_reg,
                       bias_regularizer=l2_reg, name='fc1')(flatten)
    # dropout
    dropout1 = layers.Dropout(0.5, seed=SEED)(fc1)
    # fc2: shape(None, 10)
    fc2 = layers.Dense(NUM_LABELS, activation=None, use_bias=True,
                       kernel_initializer=tn_init, bias_initializer=const_init, name='fc2',
                       kernel_regularizer=l2_reg, bias_regularizer=l2_reg)(dropout1)
    # softmax: shape(None, 10)
    softmax = layers.Softmax(name='softmax')(fc2)
    # make the model
    model = keras.Model(inputs=inputs, outputs=softmax, name='mnist')
    return model


def get_train_val(mnist_path):
    # mnist download: https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
    (train_images, train_labels), (test_images, test_labels) = mnist.load_data(mnist_path)
    print("train_images nums:{}".format(len(train_images)))
    print("test_images nums:{}".format(len(test_images)))
    return train_images, train_labels, test_images, test_labels


def show_mnist(images, labels):
    for i in range(25):
        plt.subplot(5, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)
        plt.imshow(images[i], cmap=plt.cm.gray)
        plt.xlabel(str(labels[i]))
    plt.show()


def one_hot(labels):
    onehot_labels = np.zeros(shape=[len(labels), 10])
    for i in range(len(labels)):
        index = labels[i]
        onehot_labels[i][index] = 1
    return onehot_labels


def train_model(train_images, train_labels, test_images, test_labels):
    # re-scale to the 0~1.0 range
    train_images = train_images / 255.0
    test_images = test_images / 255.0
    # reshape the mnist data to 4-D
    train_images = np.expand_dims(train_images, axis=3)
    test_images = np.expand_dims(test_images, axis=3)
    print("train_images :{}".format(train_images.shape))
    print("test_images :{}".format(test_images.shape))
    train_labels = one_hot(train_labels)
    test_labels = one_hot(test_labels)
    # build the model
    model = mnist_cnn()
    model.compile(optimizer=tf.train.AdamOptimizer(),
                  loss="categorical_crossentropy",
                  metrics=['accuracy'])
    model.fit(x=train_images, y=train_labels, epochs=5)
    test_loss, test_acc = model.evaluate(x=test_images, y=test_labels)
    print("Test Accuracy %.2f" % test_acc)
    # run predictions
    cnt = 0
    predictions = model.predict(test_images)
    for i in range(len(test_images)):
        target = np.argmax(predictions[i])
        label = np.argmax(test_labels[i])
        if target == label:
            cnt += 1
    print("correct prediction of total : %.2f" % (cnt / len(test_images)))
    model.save('mnist-model.h5')


if __name__ == "__main__":
    mnist_path = 'D:/MyGit/tensorflow-yolov3/data/mnist.npz'
    train_images, train_labels, test_images, test_labels = get_train_val(mnist_path)
    # show_mnist(train_images, train_labels)
    train_model(train_images, train_labels, test_images, test_labels)
```

3. Advanced tf.keras: callbacks with tf.keras.callbacks.Callback

Callbacks are objects passed to a model to customize and extend its behavior during training. You can write your own custom callback, or use one of the built-in tf.keras.callbacks, which include:

tf.keras.callbacks.ModelCheckpoint: save checkpoints of the model at regular intervals.
tf.keras.callbacks.LearningRateScheduler: change the learning rate dynamically.
tf.keras.callbacks.EarlyStopping: interrupt training when validation performance stops improving.
tf.keras.callbacks.TensorBoard: monitor the model's behavior with TensorBoard.
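Under the hood a callback is just a set of hooks that the training loop invokes at fixed points (epoch start/end, batch start/end). The framework-free sketch below mimics the EarlyStopping logic; the class and method names echo Keras but are simplified stand-ins, not the real API:

```python
class Callback:
    # the hook the toy training loop will invoke
    def on_epoch_end(self, epoch, logs=None):
        pass

class EarlyStoppingSketch(Callback):
    """Stop when the monitored loss has not improved for `patience` epochs."""
    def __init__(self, patience=2):
        self.patience = patience
        self.best = float('inf')
        self.wait = 0
        self.stopped_epoch = None

    def on_epoch_end(self, epoch, logs=None):
        loss = logs['val_loss']
        if loss < self.best:
            self.best, self.wait = loss, 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch

def fit_sketch(losses, callbacks):
    # a toy "training loop" that only fires the epoch-end hook
    for epoch, loss in enumerate(losses):
        for cb in callbacks:
            cb.on_epoch_end(epoch, logs={'val_loss': loss})
        if any(getattr(cb, 'stopped_epoch', None) is not None for cb in callbacks):
            return epoch
    return len(losses) - 1

es = EarlyStoppingSketch(patience=2)
last_epoch = fit_sketch([1.0, 0.8, 0.9, 0.95, 0.7], [es])
# training halts at epoch 3: val_loss failed to improve for 2 epochs
```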

To use a tf.keras.callbacks.Callback, pass it to the model's fit method:

```python
callbacks = [
    # Interrupt training if `val_loss` stops improving for over 2 epochs
    keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
    # Write TensorBoard logs to `./logs` directory
    keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
          validation_data=(val_data, val_targets))
```

Here the mnist training code is extended with a callback and a printed model summary:

```python
def train_model(train_images, train_labels, test_images, test_labels):
    # re-scale to the 0~1.0 range
    train_images = train_images / 255.0
    test_images = test_images / 255.0
    # reshape the mnist data to 4-D
    train_images = np.expand_dims(train_images, axis=3)
    test_images = np.expand_dims(test_images, axis=3)
    print("train_images :{}".format(train_images.shape))
    print("test_images :{}".format(test_images.shape))
    train_labels = one_hot(train_labels)
    test_labels = one_hot(test_labels)
    # build the model
    model = mnist_cnn()
    # print a summary of the model
    model.summary()
    # Compile the model. The first argument is the optimizer; the second is
    # the loss -- 'categorical_crossentropy', since this is a multi-class
    # classification problem; the third is metrics, the list of metrics to
    # monitor during training.
    model.compile(optimizer=tf.train.AdamOptimizer(),
                  loss="categorical_crossentropy",
                  metrics=['accuracy'])
    # model.compile(optimizer=keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=1e-5),
    #               loss='categorical_crossentropy', metrics=['accuracy'])
    # set up callbacks
    callbacks = [
        # write TensorBoard logs to the directory './logs'
        keras.callbacks.TensorBoard(log_dir='./logs'),
    ]
    # start training
    model.fit(train_images, train_labels, BATCH_SIZE, epochs=5,
              validation_data=(test_images, test_labels), callbacks=callbacks)
    # evaluate
    print('', 'evaluating on test sets...')
    loss, accuracy = model.evaluate(test_images, test_labels)
    print('test loss:', loss)
    print('test Accuracy:', accuracy)
    # save the model
    model.save('mnist-model.h5')
```
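For reference, the categorical_crossentropy loss used in compile reduces, for one-hot targets, to the negative log-probability the model assigns to the true class. A NumPy sketch of that computation (illustrative only, not the Keras implementation):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # y_true: one-hot targets; y_pred: softmax probabilities
    y_pred = np.clip(y_pred, eps, 1.0)  # guard against log(0)
    return -np.sum(y_true * np.log(y_pred), axis=-1)

y_true = np.array([[0.0, 1.0, 0.0]])
y_pred = np.array([[0.1, 0.8, 0.1]])
loss = categorical_crossentropy(y_true, y_pred)  # -log(0.8), about 0.223
```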

4. Advanced tf.keras: custom layers with tf.keras.layers.Layer

Create a custom layer by subclassing tf.keras.layers.Layer and implementing the following methods:

build: create the layer's weights, using the add_weight method.
call: define the forward pass.
compute_output_shape: specify how to compute the layer's output shape given its input shape.
Optionally, a layer can be serialized by implementing the get_config method and the from_config class method.
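compute_output_shape is pure shape bookkeeping: for a matmul-style layer it replaces the last (feature) dimension with the layer's output size and passes every leading dimension through unchanged. A plain-Python sketch of that rule, assuming a (batch, features) input shape:

```python
def compute_output_shape(input_shape, output_dim):
    # replace the last (feature) dimension with the layer's output size;
    # all leading dimensions (batch, etc.) pass through unchanged
    shape = list(input_shape)
    shape[-1] = output_dim
    return tuple(shape)
```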

Here is an example of a custom layer that performs a matmul between its input and a kernel matrix:

```python
class MyLayer(keras.layers.Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        shape = tf.TensorShape((input_shape[1], self.output_dim))
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=shape,
                                      initializer='uniform',
                                      trainable=True)
        # Be sure to call this at the end
        super(MyLayer, self).build(input_shape)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

    def compute_output_shape(self, input_shape):
        shape = tf.TensorShape(input_shape).as_list()
        shape[-1] = self.output_dim
        return tf.TensorShape(shape)

    def get_config(self):
        base_config = super(MyLayer, self).get_config()
        base_config['output_dim'] = self.output_dim
        return base_config  # note: the config must be returned

    @classmethod
    def from_config(cls, config):
        return cls(**config)


# Create a model using the custom layer
model = keras.Sequential([
    MyLayer(10),
    keras.layers.Activation('softmax')
])

# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Trains for 5 epochs.
model.fit(data, targets, batch_size=32, epochs=5)
```

