【TensorFlow】Comparing and Implementing Stochastic Training and Batch Training
1. Stochastic Training vs. Batch Training
| Training mode | Advantage | Disadvantage |
| --- | --- | --- |
| Stochastic training | Can help escape local minima | Usually needs more iterations to converge |
| Batch training | Reaches a minimum loss quickly | Consumes more computational resources |
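Both modes apply plain gradient descent to the single parameter A used in the scripts below. For the L2 loss (A·x − y)², the gradient with respect to A is 2x(Ax − y), evaluated on one sample in stochastic training and averaged over the batch in batch training. A minimal NumPy sketch of one update in each mode (the function name and sample values are illustrative, not taken from the original scripts):

```python
import numpy as np

# One manual gradient-descent step for the model output = A * x with L2 loss.
# This only illustrates what each optimizer step in the scripts below computes.
def sgd_step(A, x, y, lr=0.02):
    grad = 2.0 * x * (A * x - y)      # d/dA of (A*x - y)^2
    return A - lr * grad

# Stochastic training: one update from a single data point
A = 0.0
A = sgd_step(A, x=1.05, y=10.0)

# Batch training: average the gradient over a batch of 20 samples
x_batch = np.random.normal(1, 0.1, 20)
y_batch = np.repeat(10., 20)
grad = np.mean(2.0 * x_batch * (A * x_batch - y_batch))
A = A - 0.02 * grad
print('A after two updates:', A)
```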
2. Implementing Stochastic Training
```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.python.framework import ops

ops.reset_default_graph()

# 1. Create the graph session
sess = tf.Session()

# 2. Create the data
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)
x_data = tf.placeholder(shape=[1], dtype=tf.float32)
y_target = tf.placeholder(shape=[1], dtype=tf.float32)

# 3. Create the variable
A = tf.Variable(tf.random_normal(shape=[1]))

# 4. Add the model operation to the graph
my_output = tf.multiply(x_data, A)

# 5. Declare the L2 loss
loss = tf.square(my_output - y_target)

# 6. Declare the optimizer with learning rate 0.02
my_opt = tf.train.GradientDescentOptimizer(0.02)
train_step = my_opt.minimize(loss)

# 7. Initialize the variables
init = tf.global_variables_initializer()
sess.run(init)

# 8. Store loss values for plotting
loss_stochastic = []

# 9. Train: feed one randomly chosen sample per step
for i in range(100):
    rand_index = np.random.choice(100)
    rand_x = [x_vals[rand_index]]
    rand_y = [y_vals[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i + 1) % 5 == 0:
        print('Step #' + str(i + 1) + ' A = ' + str(sess.run(A)))
        temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
        print('Loss = ' + str(temp_loss))
        loss_stochastic.append(temp_loss)
```

Output:

```
Step #5 A = [2.0631378]
Loss = [60.90259]
Step #10 A = [3.560384]
Loss = [35.39518]
Step #15 A = [4.7225595]
Loss = [37.812637]
Step #20 A = [5.681144]
Loss = [13.796157]
Step #25 A = [6.4919457]
Loss = [13.752169]
Step #30 A = [7.1609416]
Loss = [9.70855]
Step #35 A = [7.710085]
Loss = [5.826261]
Step #40 A = [8.253489]
Loss = [7.3934216]
Step #45 A = [8.671478]
Loss = [2.5475926]
Step #50 A = [8.993064]
Loss = [1.32571]
Step #55 A = [9.101872]
Loss = [0.67589337]
Step #60 A = [9.256593]
Loss = [5.34419]
Step #65 A = [9.329251]
Loss = [0.58555096]
Step #70 A = [9.421848]
Loss = [3.088755]
Step #75 A = [9.563117]
Loss = [6.0601945]
Step #80 A = [9.661991]
Loss = [0.05205128]
Step #85 A = [9.8208685]
Loss = [2.3963788]
Step #90 A = [9.8652935]
Loss = [0.19284673]
Step #95 A = [9.842097]
Loss = [4.9211507]
Step #100 A = [10.044914]
Loss = [4.2354054]
```
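Since the data are x ~ N(1, 0.1) with target y = 10, the least-squares optimum of this one-parameter model can also be computed in closed form, which explains why A converges toward roughly 10 in the output above. A small sanity-check sketch (not part of the original article; the seed is arbitrary):

```python
import numpy as np

np.random.seed(0)  # arbitrary seed, only so this check is reproducible
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)

# Minimizing sum((A*x - y)^2) over the scalar A gives A* = sum(x*y) / sum(x^2),
# which is the value both training loops approach.
A_closed_form = np.sum(x_vals * y_vals) / np.sum(x_vals ** 2)
print('Closed-form A =', A_closed_form)  # roughly 9.9 for x drawn from N(1, 0.1)
```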
3. Implementing Batch Training

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.python.framework import ops

ops.reset_default_graph()
sess = tf.Session()

# 1. Declare the batch size (how many training samples are fed at once)
batch_size = 20

# 2. Declare the data, placeholders, and variable.
#    The change here is the placeholder shape: it now has two dimensions.
#    The first dimension is None, the second is the size of each batch.
#    We could set it explicitly to 20, but None keeps it flexible.
#    Knowing the dimensions lets TensorFlow reject invalid matrix operations.
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
A = tf.Variable(tf.random_normal(shape=[1, 1]))

# 3. Add the matrix multiplication to the graph.
#    Matrix multiplication is not commutative, so the argument order in matmul() matters.
my_output = tf.matmul(x_data, A)

# 4. Change the loss function:
#    for batch training it is the mean of the per-sample L2 losses.
loss = tf.reduce_mean(tf.square(my_output - y_target))

# 5. Declare the optimizer
my_opt = tf.train.GradientDescentOptimizer(0.02)
train_step = my_opt.minimize(loss)

# 6. Train in a loop. To compare against stochastic training,
#    store the loss every 5 iterations for plotting.

# Initialize the variables
init = tf.global_variables_initializer()
sess.run(init)

loss_batch = []
for i in range(100):
    # Draw 20 indices from 0~99 as the batch
    rand_index = np.random.choice(100, size=batch_size)
    # Transpose into column vectors of shape (20, 1)
    rand_x = np.transpose([x_vals[rand_index]])
    rand_y = np.transpose([y_vals[rand_index]])
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i + 1) % 5 == 0:
        print("Step # " + str(i + 1) + ' A = ' + str(sess.run(A)))
        temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
        print('Loss = ' + str(temp_loss))
        loss_batch.append(temp_loss)
```

Output:

```
Step # 5 A = [[2.626382]]
Loss = 55.444374
Step # 10 A = [[3.980196]]
Loss = 36.855064
Step # 15 A = [[5.0858808]]
Loss = 22.765038
Step # 20 A = [[5.9751787]]
Loss = 15.496961
Step # 25 A = [[6.713659]]
Loss = 12.349718
Step # 30 A = [[7.2950797]]
Loss = 7.5467796
Step # 35 A = [[7.782353]]
Loss = 5.17468
Step # 40 A = [[8.20625]]
Loss = 4.1199327
Step # 45 A = [[8.509094]]
Loss = 2.6329637
Step # 50 A = [[8.760488]]
Loss = 1.9998455
Step # 55 A = [[8.967735]]
Loss = 1.6577679
Step # 60 A = [[9.1537]]
Loss = 1.4356906
Step # 65 A = [[9.317189]]
Loss = 1.9666836
Step # 70 A = [[9.387019]]
Loss = 1.9287064
Step # 75 A = [[9.499526]]
Loss = 1.7477573
Step # 80 A = [[9.594302]]
Loss = 1.719229
Step # 85 A = [[9.666611]]
Loss = 1.4769726
Step # 90 A = [[9.711805]]
Loss = 1.1235845
Step # 95 A = [[9.784608]]
Loss = 1.9176414
Step # 100 A = [[9.849552]]
Loss = 1.1561565
```
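The scripts above use the TensorFlow 1.x Session/placeholder API. For orientation only, the same batch-training loop could be sketched in eager mode with tf.GradientTape; this is a sketch assuming TensorFlow 2.x with the Keras SGD optimizer, and it is not part of the original article:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x

# Same synthetic data as above
x_vals = np.random.normal(1, 0.1, 100).astype(np.float32)
y_vals = np.repeat(10., 100).astype(np.float32)

A = tf.Variable(tf.random.normal(shape=[1, 1]))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.02)
batch_size = 20

for i in range(100):
    rand_index = np.random.choice(100, size=batch_size)
    rand_x = x_vals[rand_index].reshape(-1, 1)   # shape (20, 1)
    rand_y = y_vals[rand_index].reshape(-1, 1)
    with tf.GradientTape() as tape:
        output = tf.matmul(rand_x, A)            # same model as above: x @ A
        loss = tf.reduce_mean(tf.square(output - rand_y))
    grads = tape.gradient(loss, [A])
    optimizer.apply_gradients(zip(grads, [A]))
    if (i + 1) % 5 == 0:
        print('Step #%d A = %s Loss = %f' % (i + 1, A.numpy(), loss.numpy()))
```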
4. Plotting the Loss Curves

```python
plt.plot(range(0, 100, 5), loss_stochastic, 'b-', label='Stochastic Loss')
plt.plot(range(0, 100, 5), loss_batch, 'r--', label='Batch Loss, size=20')
plt.legend(loc='upper right', prop={'size': 11})
plt.show()
```
The figure shows that the batch training loss curve is smoother, while the stochastic training loss is noisier and more erratic.
Summary

Stochastic training feeds one randomly chosen sample per update, which can help the model escape local minima but yields a noisy loss curve and usually requires more iterations to converge. Batch training averages the loss over a batch (20 samples here), so it converges to a minimum loss more quickly and smoothly, at the cost of more computation per step.