TensorFlow Deep Learning in Practice (Advanced): Regression (Function Fitting) with Visualization
#coding=gbk
'''
Advanced topic: multiple linear regression.

Modelling problem: Y = x1*w1 + x2*w2 + x3*w3 + ... + xn*wn + b,
written compactly in matrix form as Y = XW + b.

NumPy basics: a plain Python integer has no shape (dimensionality);
wrapping it with np.array() turns it into a scalar (0-d array) whose
shape can then be queried.

General practice: major and minor features should be brought into a
common range, i.e. the data needs to be normalized, which helps the
model converge. Each feature is rescaled as (value - min) / (max - min),
which maps it into [0, 1]; the labels are left untouched.
'''
from pylab import mpl
mpl.rcParams['font.sans-serif'] = ['SimHei']  # allow Chinese characters in plots
mpl.rcParams['axes.unicode_minus'] = False    # avoid garbled minus signs; the line above alone is not always enough

# Import the TensorFlow module (v1 compatibility mode)
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # placeholders and sessions require graph mode under TF 2.x
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston  # dataset loader
from sklearn.utils import shuffle         # used to shuffle the data

# NumPy basics
# Convert an integer into a 0-d array (scalar)
s_v = 20
scalar_np = np.array(s_v)
print("scalar:\n", scalar_np, scalar_np.shape)

# Vector
vector_v = [1, 2, 3, 4, 5, 6, 7, 8, 9]
vector_np = np.array(vector_v)
print("vector:\n", vector_np, vector_np.shape)

# Matrix
m_v = [[1, 2, 3], [7, 8, 9], [4, 5, 6]]
m_np = np.array(m_v)
print("matrix:\n", m_np, m_np.shape)

# Row vectors and column vectors
row = np.array([[1, 2, 3]])
print("row vector:\n", row, row.shape)

column = np.array([[1], [2], [3]])
print("column vector:\n", column, column.shape)

# Matrix arithmetic
a = np.array([[1, 2, 3], [4, 5, 6]])
print(a)
a = a + 7
print(a)
a = a * 2
print(a)
a = a + a
print(a)
a = a - 3
print(a)
a = a / 2.0
print(a)
# +, -, *, / all operate element by element

# Transpose: a_ij -> a_ji; see also reshape()
m = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(m.T)

# Elementwise (Hadamard) product: np.multiply() multiplies corresponding elements
# Matrix product: for shapes MxN and NxK, use np.matmul(a, b)
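The two products mentioned above behave very differently; a minimal NumPy sketch (with made-up 2x2 operands) contrasting elementwise multiplication with the matrix product:

```python
import numpy as np

# Elementwise (Hadamard) product vs. matrix product on the same operands
p_a = np.array([[1, 2], [3, 4]])
p_b = np.array([[5, 6], [7, 8]])

elementwise = np.multiply(p_a, p_b)  # same as p_a * p_b: corresponding entries
matrix_prod = np.matmul(p_a, p_b)    # row-by-column products, MxN times NxK

print(elementwise)   # [[ 5 12]
                     #  [21 32]]
print(matrix_prod)   # [[19 22]
                     #  [43 50]]
```

`p_a * p_b` is shorthand for `np.multiply(p_a, p_b)`, and `p_a @ p_b` for `np.matmul(p_a, p_b)`.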
m_a = np.array([[1, 2, 3]])
m_b = np.array([[3], [2], [1]])
m_c = np.matmul(m_a, m_b)
print(m_c)

# Download the data; return_X_y=True skips the dataset description and returns (data, target)
boston_price_data, _ = load_boston(return_X_y=True)
print(boston_price_data, boston_price_data.shape)

# Normalize each of the 12 feature columns into [0, 1]
for i in range(12):
    col = boston_price_data[:, i]
    boston_price_data[:, i] = (col - col.min()) / (col.max() - col.min())
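As a cross-check (a sketch with made-up numbers, assuming scikit-learn, which the script already imports from): min-max normalization, (value - min) / (max - min), matches what sklearn's MinMaxScaler computes column by column:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Two toy feature columns, normalized manually and with MinMaxScaler
data = np.array([[1.0, 10.0],
                 [2.0, 30.0],
                 [3.0, 20.0]])

manual = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))
scaled = MinMaxScaler().fit_transform(data)

print(np.allclose(manual, scaled))  # True; every column now spans [0, 1]
```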
# Take out the features
x_d = boston_price_data[:, :12]
print(x_d, x_d.shape)
print("\n")

# Take out the labels: use the actual prices (the target vector); feature column 12 is LSTAT, not the price
y_d = load_boston(return_X_y=True)[1]
print(y_d, y_d.shape)

# Model definition
x = tf.placeholder(tf.float32, [None, 12], name='x')
y = tf.placeholder(tf.float32, [None, 1], name='y')

# Fit Y = x1*w1 + x2*w2 + ... + xn*wn + b, in matrix form Y = XW + b
with tf.name_scope("model"):  # group the subgraph so the computation graph stays compact in TensorBoard
    w = tf.Variable(tf.random_normal([12, 1], stddev=0.01), name='w')  # random initial weights
    b = tf.Variable(1.0, name='b')                                     # initial bias

    def model(x, w, b):
        return tf.matmul(x, w) + b  # matrix product

    predict = model(x, w, b)  # prediction op

# Model training
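The graph above simply evaluates Y = XW + b; the same forward pass can be sketched in plain NumPy with made-up numbers (independent of the TensorFlow variables):

```python
import numpy as np

# Forward pass Y = XW + b for 2 samples and 12 features, mirroring model(x, w, b)
X = np.ones((2, 12))       # two samples, every feature equal to 1
W = np.full((12, 1), 0.5)  # every weight equal to 0.5
b0 = 1.0                   # bias

Y = np.matmul(X, W) + b0   # shape (2, 1)
print(Y.ravel())           # [7. 7.]  (12 * 1.0 * 0.5 + 1.0)
```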
train_c = 80
learning_rate = 0.01
with tf.name_scope("LossFun"):
    loss_Fun = tf.reduce_mean(tf.pow(y - predict, 2))  # mean squared error
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss_Fun)
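GradientDescentOptimizer moves w and b along the negative gradient of the loss; for the squared error on a single sample the step can be computed by hand (illustrative made-up numbers, not the Boston data; names suffixed `_s` so the script's TensorFlow variables stay untouched):

```python
import numpy as np

# One gradient-descent step for loss = (y - (x.w + b))^2 on a single sample
x_s = np.array([[1.0, 2.0]])  # one sample with 2 features
y_s = 3.0                     # its label
w_s = np.zeros((2, 1))        # weights, initially 0
b_s = 0.0                     # bias, initially 0
lr = 0.01                     # learning rate

err = (np.matmul(x_s, w_s) + b_s).item() - y_s  # prediction error
grad_w = 2 * err * x_s.T                        # d(loss)/dw
grad_b = 2 * err                                # d(loss)/db
w_s = w_s - lr * grad_w
b_s = b_s - lr * grad_b

new_err = (np.matmul(x_s, w_s) + b_s).item() - y_s
print(err ** 2, new_err ** 2)  # the squared error shrinks after the step
```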
# The optimizer above performs one gradient-descent update per run; now create the session
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

loss_l = []
for i in range(train_c):
    loss_s = 0.0
    for x_, y_ in zip(x_d, y_d):
        # Reshape each sample so x_ and y_ match the placeholder dimensions
        x_ = x_.reshape(1, 12)
        y_ = y_.reshape(1, 1)
        _, loss = sess.run([optimizer, loss_Fun], feed_dict={x: x_, y: y_})
        # For TensorBoard visualization, run the summary op as well:
        # _, summary_s, loss = sess.run([optimizer, summary_loss_op, loss_Fun],
        #                               feed_dict={x: x_, y: y_})
        # write.add_summary(summary_s, i)
        loss_s = loss_s + loss
    x_d, y_d = shuffle(x_d, y_d)  # reshuffle each epoch (assign back, or the shuffle has no effect)
    b0 = b.eval(session=sess)
    w0 = w.eval(session=sess)
    loss_average = loss_s / len(y_d)
    loss_l.append(loss_average)
    print("train count=", i + 1, "loss=", loss_average, "b=", b0, "w=", w0)

# Model validation
x_test = x_d[430]
x_test = x_test.reshape(1, 12)
p = sess.run(predict, feed_dict={x: x_test})
print("predicted: %f" % p, "label: %f\n" % y_d[430])

plt.plot(loss_l)
plt.title("Loss curve")
plt.show()

logdir = "E:/VSCODE/"
if tf.gfile.Exists(logdir):
    tf.gfile.DeleteRecursively(logdir)  # clear old log files

'''
TensorBoard visualization:
summary_loss_op = tf.summary.scalar("loss", loss_Fun)  # log the loss to the SCALARS tab
merged = tf.summary.merge_all()                        # merge all summary ops for writing
'''
write = tf.summary.FileWriter(logdir, tf.get_default_graph())
write.close()
Appendix:
This article follows the China University MOOC course 深度学习应用开发-Tensorflow实战 (Deep Learning Application Development: TensorFlow in Practice).