

Processing the MNIST dataset with a neural network built on the TensorFlow framework

Published: 2025/1/21 · Programming Q&A · 豆豆
This article, collected and organized by 生活随笔, walks through a neural-network structure built on the TensorFlow framework for processing the MNIST dataset, and is shared here as a reference.

一、Build the computation graph

  • Prepare the training data
  • Define the forward pass (Inference)
  • Define the loss (export scalars such as loss and accuracy for TensorBoard)
  • Define the training method
  • Initialize the variables
  • Save the computation graph

二、Create a session

  • Handle the summary objects
  • Feed in data and observe loss, accuracy, etc.
  • Evaluate the model on the test data
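Before the full TensorFlow listing, the forward pass sketched in the outline (two sigmoid hidden layers, softmax output, sizes 784 → 100 → 10 → 10) can be illustrated in plain NumPy. This is a minimal sketch, not the article's code: the `sigmoid`/`softmax` helpers and the random dummy batch are defined here only for illustration.

```python
import numpy as np

def sigmoid(z):
    # elementwise logistic activation, as used for both hidden layers
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # row-wise softmax; subtracting the row max keeps exp() from overflowing
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
h1, h2 = 100, 10                        # hidden sizes from the article
W1 = rng.normal(size=(784, h1)) * 0.1; b1 = np.zeros(h1)
W2 = rng.normal(size=(h1, h2)) * 0.1;  b2 = np.zeros(h2)
W3 = rng.normal(size=(h2, 10)) * 0.1;  b3 = np.zeros(10)

X = rng.normal(size=(2, 784))           # a dummy batch of 2 "images"
y_1 = sigmoid(X @ W1 + b1)              # (2, h1)
y_2 = sigmoid(y_1 @ W2 + b2)            # (2, h2)
y = softmax(y_2 @ W3 + b3)              # (2, 10); each row sums to 1
print(y.shape, y.sum(axis=1))
```

Each row of `y` is a probability distribution over the 10 digit classes, which is what the cross-entropy loss in the next step consumes.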
import tensorflow as tf
import numpy as np
import os
from tensorflow.examples.tutorials.mnist import input_data

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # typo fixed (was ...LEVEl): suppress TF C++ log noise
# tf.reset_default_graph()
mnist = input_data.read_data_sets(r'D:\MyData\zengxf\.keras\datasets\MNIST_data', one_hot=True)
xq, yq = mnist.train.next_batch(2)  # shapes: (2, 784), (2, 10)

h1 = 100  # hidden layer 1 size
h2 = 10   # hidden layer 2 size

with tf.name_scope("Input"):
    X = tf.placeholder("float", [None, 784], name='X')
    Y_true = tf.placeholder("float", [None, 10], name='Y_true')

with tf.name_scope("Inference"):
    with tf.name_scope("hidden1"):
        W1 = tf.Variable(tf.random_normal([784, h1]) * 0.1, name='W1')
        b1 = tf.Variable(tf.zeros([h1]), name='b1')
        y_1 = tf.nn.sigmoid(tf.matmul(X, W1) + b1)    # (None, h1)
    with tf.name_scope("hidden2"):
        W2 = tf.Variable(tf.random_normal([h1, h2]) * 0.1, name='W2')
        b2 = tf.Variable(tf.zeros([h2]), name='b2')
        y_2 = tf.nn.sigmoid(tf.matmul(y_1, W2) + b2)  # (None, h2)
    with tf.name_scope("Output"):
        W3 = tf.Variable(tf.truncated_normal([h2, 10]) * 0.1, name='W3')
        b3 = tf.Variable(tf.zeros([10]), name='b3')
        y = tf.nn.softmax(tf.matmul(y_2, W3) + b3)    # (None, 10)

with tf.name_scope("Loss"):
    # cross-entropy; note reduce_sum without an axis sums over the whole batch,
    # so this is the batch total rather than the per-sample mean
    loss = tf.reduce_mean(-tf.reduce_sum(tf.multiply(Y_true, tf.log(y))))
    loss_scalar = tf.summary.scalar('loss', loss)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(y, 1), tf.argmax(Y_true, 1)), tf.float32))
    accuracy_scalar = tf.summary.scalar('accuracy', accuracy)

with tf.name_scope("Train"):  # typo fixed (was "Trian")
    # optimizer = tf.train.AdamOptimizer(learning_rate=0.05)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
    train_op = optimizer.minimize(loss)

init = tf.global_variables_initializer()
merge_summary_op = tf.summary.merge_all()
writer = tf.summary.FileWriter('logs', tf.get_default_graph())

sess = tf.Session()
sess.run(init)
for step in range(5000):
    # run the graph for one step, producing that step's training statistics
    train_x, train_y = mnist.train.next_batch(500)
    _, summary_str, train_loss, acc = sess.run(
        [train_op, merge_summary_op, loss, accuracy],
        feed_dict={X: train_x, Y_true: train_y})
    if step % 100 == 99:
        print('loss=', train_loss)
    writer.add_summary(summary_str, step)

# evaluate the final model's accuracy on the test set
print(sess.run(accuracy, feed_dict={X: mnist.test.images, Y_true: mnist.test.labels}))  # 0.9185
writer.close()
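The loss and accuracy nodes in the `Loss` scope above can be checked by hand with a small NumPy sketch. Note two deliberate differences, both labeled in the comments: this version averages the cross-entropy per sample (the TF code sums over the whole batch, which differs only by a constant factor), and it adds an `eps` guard against `log(0)`, which `tf.log(y)` above does not have.

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    # per-sample mean of -sum(y_true * log(y_pred)); eps avoids log(0)
    return float(np.mean(-np.sum(y_true * np.log(y_pred + eps), axis=1)))

def accuracy(y_true, y_pred):
    # fraction of rows where the predicted class matches the one-hot label
    return float(np.mean(np.argmax(y_pred, 1) == np.argmax(y_true, 1)))

y_true = np.array([[0, 1, 0], [1, 0, 0]], dtype=float)  # one-hot labels
y_pred = np.array([[0.1, 0.8, 0.1], [0.3, 0.5, 0.2]])   # softmax outputs
print(cross_entropy(y_true, y_pred))
print(accuracy(y_true, y_pred))  # 0.5: first sample correct, second wrong
```

The loss here is -(ln 0.8 + ln 0.3) / 2 ≈ 0.714; as the predicted probability of the true class approaches 1 on every row, the loss approaches 0.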

Summary

The above is the full content of processing the MNIST dataset with a neural network built on the TensorFlow framework, collected and organized by 生活随笔; hopefully it helps you solve the problem you ran into.
