【TensorFlow-windows】Study Notes 4: Building, Saving, and Using Models


Preface

The previous post covered some of the basic pieces needed to build a neural network: layers, activation functions, loss functions, optimizers, and so on. This post settles the problems left over from that post: building a handwritten-digit recognition network with a CNN, saving the model parameters, and recognizing a single image.

As is customary, the reference blogs:

Saving and loading models in tensorflow (tensorflow之保存模型与加载模型)

[tensorflow] Saving a model, reloading it, and related operations (【tensorflow】保存模型、再次加载模型等操作)

Testing a single image with a tensorflow binary-classification model (tensorflow_二分类模型之单张图片测试)

TensorFlow-Examples

Training Implementation

First, the dataset used for this post: link: https://pan.baidu.com/s/1ugEy85182vjcXQ8VoMJAbg password: 1o83

It is simply the handwritten-digit dataset rendered out as PNG images, with a txt file recording each image path and its label; the previous post explains the details. Without further ado, let's get going.
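Judging from how the file is parsed in the dataset-processing code below (each line split on a single space, image path first, integer label second), a line of train_labels.txt looks roughly like this; the exact paths are only illustrative:

./mnist/train/0/0_1.png 0
./mnist/train/0/0_2.png 0
./mnist/train/5/5_17.png 5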

Dataset Processing

import tensorflow as tf

IMG_HEIGHT = 28  # height
IMG_WIDTH = 28   # width
CHANNELS = 3     # number of channels

def read_images(dataset_path, batch_size):
    imagepaths, labels = list(), list()
    data = open(dataset_path, 'r').read().splitlines()
    for d in data:
        imagepaths.append(d.split(' ')[0])
        labels.append(int(d.split(' ')[1]))
    # convert to tensors
    imagepaths = tf.convert_to_tensor(imagepaths, dtype=tf.string)
    labels = tf.convert_to_tensor(labels, dtype=tf.int32)
    # build a TF queue and shuffle the data
    image, label = tf.train.slice_input_producer([imagepaths, labels], shuffle=True)
    # read the image file
    image = tf.read_file(image)
    image = tf.image.decode_jpeg(image, channels=CHANNELS)
    # resize the image to the required size
    image = tf.image.resize_images(image, [IMG_HEIGHT, IMG_WIDTH])
    # manual normalization
    image = image * 1.0/127.5 - 1.0
    # create batches
    inputX, inputY = tf.train.batch([image, label], batch_size=batch_size,
                                    capacity=batch_size * 8,
                                    num_threads=4)
    return inputX, inputY

Building the Network

# training parameters
learning_rate = 0.001
num_steps = 1000
batch_size = 128
display_step = 10
# network parameters
num_classes = 10
dropout = 0.75

# convolution op
def conv2d(x, W, b, strides=1):
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

# max-pooling op
def maxpool2d(x, k=2):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME')

# network structure
def conv_net(x, weights, biases, dropout):
    # input data
    x = tf.reshape(x, shape=[-1, IMG_HEIGHT, IMG_WIDTH, CHANNELS])
    # first convolutional layer
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    conv1 = maxpool2d(conv1, k=2)
    # second convolutional layer
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    conv2 = maxpool2d(conv2, k=2)
    # fully connected layer
    fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    fc1 = tf.nn.dropout(fc1, dropout)
    # output
    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    return out

# initialize the weights as variables so they can be saved later
weights = {
    'wc1': tf.Variable(tf.random_normal([5, 5, CHANNELS, 32])),
    'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
    'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
    'out': tf.Variable(tf.random_normal([1024, num_classes]))
}
# initialize the biases as variables so they can be saved later
biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([num_classes]))
}

Defining the Network Inputs and Outputs

Note: if you want to leave an interface so that single images can later be fed in for testing, you must expose the inputs, like this:

# graph input placeholders
X = tf.placeholder(tf.float32, [None, IMG_HEIGHT, IMG_WIDTH, CHANNELS], name='X')
Y = tf.placeholder(tf.float32, [None, num_classes], name='Y')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')

Be sure to set the name argument, because this is how we will fetch these placeholders later, and fetching by name is the most common approach.
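A tiny standalone sketch (not part of the training script) of what happens without a name:

# a placeholder created without name= only gets an auto-generated name
tmp = tf.placeholder(tf.float32, [None, 10])
print(tmp.name)  # something like 'Placeholder:0'
# auto-generated names like 'Placeholder:0', 'Placeholder_1:0' shift as the graph
# grows, so fetching them with get_tensor_by_name after restoring is unreliable;
# name='X' gives a stable handle.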

Defining the Loss, Optimizer, and Training/Evaluation Ops

# build the model
logits = conv_net(X, weights, biases, keep_prob)
prediction = tf.nn.softmax(logits, name='prediction')
# loss function and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# evaluation ops
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

Training

Remember the one-hot encoding with tf.one_hot().
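A minimal sketch of what tf.one_hot does with the arguments used in the training loop below (depth = num_classes, on_value 1, off_value 0):

import tensorflow as tf

# a small batch of integer labels, converted to one-hot rows
labels = tf.constant([5, 0, 3])
one_hot = tf.one_hot(labels, 10, 1, 0)
with tf.Session() as sess:
    print(sess.run(one_hot))
    # [[0 0 0 0 0 1 0 0 0 0]
    #  [1 0 0 0 0 0 0 0 0 0]
    #  [0 0 0 1 0 0 0 0 0 0]]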

# saver for the model
saver = tf.train.Saver()
# tf.add_to_collection('predict', prediction)
# training
init = tf.global_variables_initializer()  # variable initializer
print('读取数据集:')  # "Reading dataset:"
input_img, input_label = read_images('./mnist/train_labels.txt', batch_size=batch_size)
print('训练模型')  # "Training model"
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    sess.run(init)  # initialize parameters
    tf.train.start_queue_runners(sess=sess, coord=coord)
    for step in range(1, num_steps+1):
        batch_x, batch_y = sess.run([input_img, tf.one_hot(input_label, num_classes, 1, 0)])
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y, keep_prob: 0.8})
        if step % display_step == 0 or step == 1:
            # calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y,
                                                                 keep_prob: 1.0})
            print("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc))
    coord.request_stop()
    coord.join()
    print("Optimization Finished!")
    saver.save(sess, './cnn_mnist_model/CNN_Mnist')

Output:

讀取數(shù)據(jù)集: 訓(xùn)練模型 2018-08-03 12:38:05.596648: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 2018-08-03 12:38:05.882851: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1392] Found device 0 with properties: name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.759 pciBusID: 0000:01:00.0 totalMemory: 6.00GiB freeMemory: 4.96GiB 2018-08-03 12:38:05.889153: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1471] Adding visible gpu devices: 0 2018-08-03 12:38:06.600411: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix: 2018-08-03 12:38:06.604687: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:958] 0 2018-08-03 12:38:06.606494: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0: N 2018-08-03 12:38:06.608588: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4726 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1) Step 1, Minibatch Loss= 206875.5000, Training Accuracy= 0.219 Step 10, Minibatch Loss= 71490.0000, Training Accuracy= 0.164 Step 20, Minibatch Loss= 27775.7266, Training Accuracy= 0.398 Step 30, Minibatch Loss= 15692.2725, Training Accuracy= 0.641 Step 40, Minibatch Loss= 18211.4141, Training Accuracy= 0.625 Step 50, Minibatch Loss= 7250.1758, Training Accuracy= 0.789 Step 60, Minibatch Loss= 10694.9902, Training Accuracy= 0.750 Step 70, Minibatch Loss= 10783.8535, Training Accuracy= 0.766 Step 80, Minibatch Loss= 6080.1138, Training Accuracy= 0.844 Step 90, Minibatch Loss= 6720.9380, Training Accuracy= 0.867 Step 100, Minibatch Loss= 3673.7524, Training Accuracy= 0.922 Step 110, Minibatch Loss= 7893.8228, Training Accuracy= 0.836 Step 120, Minibatch Loss= 6805.0176, Training Accuracy= 0.852 Step 130, Minibatch Loss= 2863.3728, Training Accuracy= 0.906 Step 140, Minibatch Loss= 3335.6992, Training Accuracy= 0.883 Step 150, Minibatch Loss= 3514.4031, Training Accuracy= 0.914 Step 160, Minibatch Loss= 1842.5328, Training Accuracy= 0.945 Step 170, Minibatch Loss= 3443.9966, Training Accuracy= 0.914 Step 180, Minibatch Loss= 1961.7180, Training Accuracy= 0.945 Step 190, Minibatch Loss= 2919.5215, Training Accuracy= 0.898 Step 200, Minibatch Loss= 4270.7686, Training Accuracy= 0.891 Step 210, Minibatch Loss= 3591.2534, Training Accuracy= 0.922 Step 220, Minibatch Loss= 4692.2163, Training Accuracy= 0.867 Step 230, Minibatch Loss= 1537.0554, Training Accuracy= 0.914 Step 240, Minibatch Loss= 3574.1797, Training Accuracy= 0.898 Step 250, Minibatch Loss= 5143.3276, Training Accuracy= 0.898 Step 260, Minibatch Loss= 2142.9756, Training Accuracy= 0.922 Step 270, Minibatch Loss= 1323.6707, Training Accuracy= 0.945 Step 280, Minibatch Loss= 2004.2051, Training Accuracy= 0.961 Step 290, Minibatch Loss= 1112.9484, Training Accuracy= 0.938 Step 300, Minibatch Loss= 1977.6018, Training Accuracy= 0.922 Step 310, Minibatch Loss= 876.0104, Training Accuracy= 0.977 Step 320, Minibatch Loss= 3448.3142, Training Accuracy= 0.953 Step 330, Minibatch Loss= 1173.9749, Training Accuracy= 0.961 Step 340, Minibatch Loss= 2152.9966, Training Accuracy= 0.938 Step 350, Minibatch Loss= 3113.6838, 
Training Accuracy= 0.938 Step 360, Minibatch Loss= 1779.6680, Training Accuracy= 0.922 Step 370, Minibatch Loss= 2738.2637, Training Accuracy= 0.930 Step 380, Minibatch Loss= 1666.9695, Training Accuracy= 0.922 Step 390, Minibatch Loss= 2076.6716, Training Accuracy= 0.914 Step 400, Minibatch Loss= 3356.1475, Training Accuracy= 0.914 Step 410, Minibatch Loss= 1222.7729, Training Accuracy= 0.953 Step 420, Minibatch Loss= 2422.6355, Training Accuracy= 0.898 Step 430, Minibatch Loss= 4377.9385, Training Accuracy= 0.914 Step 440, Minibatch Loss= 1566.1058, Training Accuracy= 0.969 Step 450, Minibatch Loss= 3540.1555, Training Accuracy= 0.875 Step 460, Minibatch Loss= 1136.4354, Training Accuracy= 0.961 Step 470, Minibatch Loss= 2821.9456, Training Accuracy= 0.938 Step 480, Minibatch Loss= 1804.5267, Training Accuracy= 0.945 Step 490, Minibatch Loss= 625.0988, Training Accuracy= 0.977 Step 500, Minibatch Loss= 2406.8958, Training Accuracy= 0.930 Step 510, Minibatch Loss= 1198.2866, Training Accuracy= 0.961 Step 520, Minibatch Loss= 680.7784, Training Accuracy= 0.953 Step 530, Minibatch Loss= 2329.2104, Training Accuracy= 0.961 Step 540, Minibatch Loss= 848.0190, Training Accuracy= 0.945 Step 550, Minibatch Loss= 1327.9423, Training Accuracy= 0.938 Step 560, Minibatch Loss= 1020.9082, Training Accuracy= 0.961 Step 570, Minibatch Loss= 1885.4563, Training Accuracy= 0.922 Step 580, Minibatch Loss= 820.5620, Training Accuracy= 0.953 Step 590, Minibatch Loss= 1448.5205, Training Accuracy= 0.938 Step 600, Minibatch Loss= 857.7993, Training Accuracy= 0.969 Step 610, Minibatch Loss= 1193.5856, Training Accuracy= 0.930 Step 620, Minibatch Loss= 1337.5518, Training Accuracy= 0.961 Step 630, Minibatch Loss= 2121.9165, Training Accuracy= 0.953 Step 640, Minibatch Loss= 1516.9609, Training Accuracy= 0.938 Step 650, Minibatch Loss= 666.7323, Training Accuracy= 0.977 Step 660, Minibatch Loss= 1004.4291, Training Accuracy= 0.953 Step 670, Minibatch Loss= 193.3173, Training Accuracy= 0.984 Step 680, Minibatch Loss= 1339.3765, Training Accuracy= 0.945 Step 690, Minibatch Loss= 709.9714, Training Accuracy= 0.961 Step 700, Minibatch Loss= 1380.6301, Training Accuracy= 0.953 Step 710, Minibatch Loss= 630.5464, Training Accuracy= 0.977 Step 720, Minibatch Loss= 667.1447, Training Accuracy= 0.953 Step 730, Minibatch Loss= 1253.6014, Training Accuracy= 0.977 Step 740, Minibatch Loss= 473.8666, Training Accuracy= 0.984 Step 750, Minibatch Loss= 809.3101, Training Accuracy= 0.961 Step 760, Minibatch Loss= 508.8592, Training Accuracy= 0.984 Step 770, Minibatch Loss= 308.9244, Training Accuracy= 0.969 Step 780, Minibatch Loss= 1291.0034, Training Accuracy= 0.984 Step 790, Minibatch Loss= 1884.8574, Training Accuracy= 0.938 Step 800, Minibatch Loss= 1481.6635, Training Accuracy= 0.961 Step 810, Minibatch Loss= 463.2684, Training Accuracy= 0.969 Step 820, Minibatch Loss= 1116.5591, Training Accuracy= 0.961 Step 830, Minibatch Loss= 2422.9155, Training Accuracy= 0.953 Step 840, Minibatch Loss= 471.8990, Training Accuracy= 0.984 Step 850, Minibatch Loss= 1480.4053, Training Accuracy= 0.945 Step 860, Minibatch Loss= 1062.6339, Training Accuracy= 0.938 Step 870, Minibatch Loss= 833.3881, Training Accuracy= 0.953 Step 880, Minibatch Loss= 2153.9014, Training Accuracy= 0.953 Step 890, Minibatch Loss= 1617.7456, Training Accuracy= 0.953 Step 900, Minibatch Loss= 347.2119, Training Accuracy= 0.969 Step 910, Minibatch Loss= 175.5020, Training Accuracy= 0.977 Step 920, Minibatch Loss= 680.8482, Training Accuracy= 0.969 Step 930, 
Minibatch Loss= 240.1681, Training Accuracy= 0.977 Step 940, Minibatch Loss= 882.4927, Training Accuracy= 0.977 Step 950, Minibatch Loss= 407.1322, Training Accuracy= 0.977 Step 960, Minibatch Loss= 300.9460, Training Accuracy= 0.969 Step 970, Minibatch Loss= 1848.9391, Training Accuracy= 0.945 Step 980, Minibatch Loss= 496.5137, Training Accuracy= 0.969 Step 990, Minibatch Loss= 473.6212, Training Accuracy= 0.969 Step 1000, Minibatch Loss= 124.8958, Training Accuracy= 0.992 Optimization Finished!

[Update log] 2019-9-2
tf.train.Saver also accepts additional arguments; see the official documentation for details. A small sketch follows the list.

  • max_to_keep: how many of the most recent checkpoints to keep, so checkpoints don't accumulate until the disk fills up; once the set number is reached, saving a new checkpoint deletes the oldest one, keeping the number of stored checkpoints at the configured value
  • keep_checkpoint_every_n_hours: additionally keep one checkpoint every n hours of training (these are not deleted by max_to_keep)
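A minimal sketch with assumed values (keep the 5 most recent checkpoints, plus one every 2 hours):

# assumed values for illustration
saver = tf.train.Saver(max_to_keep=5, keep_checkpoint_every_n_hours=2)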

Loading the Model and Testing a Single Image

The reason I struggled for so long: when defining the training network's inputs I didn't set the name argument, so get_tensor_by_name could never retrieve the corresponding input interface and the test image couldn't be fed into the network. I was beside myself when I finally found the cause.

Reading the Image

Read the image directly with OpenCV's functions, remember to process it the same way as the training images, and finally reshape it to (1, 28, 28, 3). This part also held me up for a while; more on that below.

import cv2
import numpy as np

images = []
image = cv2.imread('./mnist/test/5/5_9.png')
images.append(image)
images = np.array(images, dtype=np.uint8)
images = images.astype('float32')
images = np.subtract(np.multiply(images, 1.0/127.5), 1.0)
x_batch = images.reshape(1, 28, 28, 3)

Loading the Model

saver = tf.train.import_meta_graph('./cnn_mnist_model/CNN_Mnist.meta')
saver.restore(sess, './cnn_mnist_model/CNN_Mnist')
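If you would rather not hard-code the checkpoint prefix, tf.train.latest_checkpoint can resolve it from the directory (a sketch; it relies on the 'checkpoint' index file that saver.save writes next to the model):

ckpt = tf.train.latest_checkpoint('./cnn_mnist_model')  # resolves to './cnn_mnist_model/CNN_Mnist' here
saver = tf.train.import_meta_graph(ckpt + '.meta')
saver.restore(sess, ckpt)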

Prediction

Fetch the prediction op and the placeholders that receive the test image:

graph = tf.get_default_graph()
pred = graph.get_tensor_by_name('prediction:0')
# X = graph.get_operation_by_name('X').outputs[0]
X = graph.get_tensor_by_name('X:0')
keep_prob = graph.get_tensor_by_name('keep_prob:0')

Then run the prediction directly:

result = sess.run(pred, feed_dict={X: x_batch, keep_prob: 1.0})
print(result)

Result:

2018-08-03 12:46:50.098990: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:958] 0
2018-08-03 12:46:50.101351: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0: N
2018-08-03 12:46:50.104446: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4726 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
[[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]]

I also predicted a few other images, and the results were basically correct.
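The printed result is the raw softmax vector over the 10 classes; to report a digit instead, np.argmax is enough (a small sketch):

import numpy as np

result = sess.run(pred, feed_dict={X: x_batch, keep_prob: 1.0})
digit = int(np.argmax(result, axis=1)[0])
print('predicted digit:', digit)  # 5 for the example image above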

As mentioned above, reading the image held me up for a while, because at first I read the test image the same way as during training, that is, with TensorFlow's image ops:

image = tf.read_file('./mnist/test/5/5_9.png')
image = tf.image.decode_jpeg(image, channels=3)
# resize the image to the required size
image = tf.image.resize_images(image, [28, 28])
# manual normalization
image = image * 1.0/127.5 - 1.0
image = tf.reshape(image, shape=[1, 28, 28, 3])

which raised the following error:

TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles.
For reference, the tensor object was Tensor("Reshape:0", shape=(1, 28, 28, 3), dtype=float32) which was passed to the feed with key Tensor("X:0", shape=(?, 28, 28, 3), dtype=float32).

The error means that a tf.Tensor cannot be fed to the input placeholder; acceptable feed values are Python scalars, strings, lists, numpy ndarrays, and so on. So the value has to be extracted from the tf.Tensor first. If you have read the earlier posts, you know this is done with eval(), exactly as in Theano:

Change the original test line

result=sess.run(pred,feed_dict={X:image,keep_prob:1.0})

to:

result=sess.run(pred,feed_dict={X:image.eval(),keep_prob:1.0})

and it works. So for testing it is best to do the preprocessing with numpy arrays directly, rather than converting to TensorFlow tensors only to convert back again.

Postscript

I really envy how concise model construction, saving, and loading are in TensorLayer and tflearn; I'm tempted to jump ship.

Code for this post:

Training: link: https://pan.baidu.com/s/1zZYGgnGj3kttklzZnyJLQA password: zepl

Testing: link: https://pan.baidu.com/s/1BygjSatjxtuVIq_HHt9o7A password: ky87

