# TensorFlow Learning Notes
- GitHub repository: https://github.com/lawlite19/MachineLearning_TensorFlow

## I. Introduction to TensorFlow
### 1. What is TensorFlow
- Official site: https://www.tensorflow.org/
- TensorFlow is a Python-fronted neural-network framework developed by Google; it is an open-source library that performs numerical computation with data flow graphs.
- You first draw the computation graph, which can be seen as a series of composable computation operations; the Python program is then translated into more efficient C++ and the computation runs on the backend.

### 2. Why TensorFlow is powerful
- It excels at training deep neural networks.
- It makes getting started with neural networks fast, greatly lowering the cost and difficulty of developing deep learning (deep neural network) applications.
- TensorFlow is open source, so everyone can use and help maintain it.

### 3. Installing TensorFlow
- At the time of writing, installing TensorFlow directly on Windows was not supported; you can use a virtual machine or install it via Docker.
- The installation below is done on CentOS 6.5.
- Install Python 2.7 (CentOS ships with Python 2.6 by default)
  - First install the zlib dependencies, needed later when installing easy_install: `yum install zlib`, `yum install zlib-devel`
  - Then install the openssl dependencies, needed later when installing pip: `yum install openssl`, `yum install openssl-devel`
  - Download the source package (I uploaded it to GitHub); append `--no-check-certificate` because of the https URL: `wget https://raw.githubusercontent.com/lawlite19/LinuxSoftware/master/python/Python-2.7.12.tgz --no-check-certificate`
  - Extract it: `tar -zxvf xxx`
  - Enter the directory and configure: `./configure --prefix=/usr/local/python2.7`
  - Compile and install: `make && make install`
  - Create a symlink so the system default python becomes Python 2.7: `ln -fs /usr/local/python2.7/bin/python2.7 /usr/bin/python`
  - Fix yum, whose executable still needs the original Python 2.6: `vim /usr/bin/yum` and change `#!/usr/bin/python` to the system's original Python version, `#!/usr/bin/python2.6`
- Install easy_install
  - Download: `wget https://raw.githubusercontent.com/lawlite19/LinuxSoftware/blob/master/python/setuptools-26.1.1.tar.gz --no-check-certificate`
  - Extract: `tar -zxvf xxx`
  - `python setup.py build` (note that `python` here is the new Python 2.7)
  - `python setup.py install`
  - easy_install now appears under /usr/local/python2.7/bin
  - Create a symlink: `ln -s /usr/local/python2.7/bin/easy_install /usr/local/bin/easy_install`
  - You can now install packages with `easy_install <package>`
- Install pip
  - Download:
  - Extract: `tar -zxvf xxx`
  - Install: `python setup.py install`
  - pip now appears under /usr/local/python2.7/bin
  - Likewise create a symlink: `ln -s /usr/local/python2.7/bin/pip /usr/local/bin/pip`
  - You can now install packages with `pip install <package>`
- Install Wing IDE
  - It installs to /usr/local/lib by default; enter that directory and run `./wing` to start it
  - Create a symlink: `ln -s /usr/local/lib/wingide5.1/wing /usr/local/bin/wing`
  - Registration crack:
- [Aside] Install VMware Tools so you can copy and paste between Windows and Linux
  - Start CentOS
  - In VMware choose Virtual Machine -> Install VMware Tools
  - A VMware Tools folder pops up automatically
  - Copy the archive to the root directory: `cp VMwareTools-9.9.3-2759765.tar.gz /root`
  - Extract it: `tar -zxvf VMwareTools-9.9.3-2759765.tar.gz`
  - Enter the directory, run `vmware-install.pl`, and press Enter all the way through
  - Reboot CentOS
- Install numpy
  - Installed directly without errors
- Install scipy
  - Install the dependencies: `yum install bzip2-devel pcre-devel ncurses-devel readline-devel tk-devel gcc-c++ lapack-devel`
  - Then install: `pip install scipy`
- Install matplotlib
  - Install the dependency: `yum install libpng-devel`
  - Then install: `pip install matplotlib`
  - Running it may produce the following error:
    `ImportError: No module named _tkinter`
  - Install tcl8.5.9-src.tar.gz: enter the directory and run `./configure`, `make`, `make install`
  - Install tk8.5.9-src.tar.gz the same way
  - [Note] You must rebuild Python 2.7 afterwards so it links against tkinter
- Install scikit-learn
  - Installed directly without errors, but the bz2 module was missing
  - Copy the bz2 module from the system Python 2.6 into the corresponding Python 2.7 directory:
    `cp /usr/lib/python2.6/lib-dynload/bz2.so /usr/local/python2.7/lib/python2.7/lib-dynload`
- Install TensorFlow
  - See the official site for the full instructions.
  - Choose the wheel matching your platform and Python version:

```
# Ubuntu/Linux 64-bit, CPU only, Python 2.7
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.0rc0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-0.12.0rc0-cp27-none-linux_x86_64.whl
# Mac OS X, CPU only, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0rc0-py2-none-any.whl
# Mac OS X, GPU enabled, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-0.12.0rc0-py2-none-any.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.4
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-0.12.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.5
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.12.0rc0-cp35-cp35m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.5
# Requires CUDA toolkit 8.0 and CuDNN v5. For other versions, see "Installing from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-0.12.0rc0-cp35-cp35m-linux_x86_64.whl
# Mac OS X, CPU only, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0rc0-py3-none-any.whl
# Mac OS X, GPU enabled, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-0.12.0rc0-py3-none-any.whl
```

  - Then install it for the matching Python version:

```
# Python 2
$ sudo pip install --upgrade $TF_BINARY_URL
# Python 3
$ sudo pip3 install --upgrade $TF_BINARY_URL
```
- The glibc dependency may be missing; check the version named in the error message.
- You may also get this error:

```
ImportError: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.19' not found (required by /usr/local/python2.7/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)
```

- Install the matching version of glibc
  - Check the currently available glibc versions: `strings /lib64/libc.so.6 | grep GLIBC`
  - Download the matching version: `wget http://ftp.gnu.org/gnu/glibc/glibc-2.17.tar.gz`
  - Extract: `tar -zxvf glibc-2.17.tar.gz`
  - Enter the directory and create a build folder: `cd glibc-2.17 && mkdir build`
  - Configure:

```
../configure \
  --prefix=/usr \
  --disable-profile \
  --enable-add-ons \
  --enable-kernel=2.6.25 \
  --libexecdir=/usr/lib/glibc
```

  - Compile and install: `make && make install`
  - Check again with `strings /lib64/libc.so.6 | grep GLIBC`
- Add support for GLIBCXX_3.4.19
  - Download: `wget https://raw.githubusercontent.com/lawlite19/LinuxSoftware/master/python2.7_tensorflow/libstdc++.so.6.0.20`
  - Copy it to /usr/lib64: `cp libstdc++.so.6.0.20 /usr/lib64/`
  - Add execute permission: `chmod +x /usr/lib64/libstdc++.so.6.0.20`
  - Remove the old library: `rm -rf /usr/lib64/libstdc++.so.6`
  - Create a symlink: `ln -s /usr/lib64/libstdc++.so.6.0.20 /usr/lib64/libstdc++.so.6`
  - Check that the version is now present: `strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX`
- Running may still fail with an encoding error; in that case install version 0.10.0 instead: `pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl`
- Install pandas
  - `pip install pandas` worked without problems
## II. TensorFlow Fundamentals
### 1. Processing structure
- In TensorFlow you first define the structure of the neural network, and only then feed data into that structure for computation and training.
- TensorFlow performs its computation with data flow graphs.
- First we build a data flow graph.
- Then we put our data, which exists in the form of tensors, into the graph to be computed.
- Tensors:
  - Tensors come in several ranks. A rank-0 tensor is a scalar, i.e. a single number such as `1`.
  - A rank-1 tensor is a vector, e.g. the one-dimensional `[1, 2, 3]`.
  - A rank-2 tensor is a matrix, e.g. the two-dimensional `[[1, 2, 3], [4, 5, 6], [7, 8, 9]]`.
  - And so on for rank 3 and higher.
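To make the ranks concrete, here is a small sketch of my own (not from the original post), using the same TF 0.x API as the rest of these notes:

```python
import tensorflow as tf

scalar = tf.constant(1)                                   # rank-0 tensor: a scalar
vector = tf.constant([1, 2, 3])                           # rank-1 tensor: a vector
matrix = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])   # rank-2 tensor: a matrix

with tf.Session() as sess:
    print(sess.run(scalar))       # 1
    print(sess.run(vector))       # [1 2 3]
    print(matrix.get_shape())     # (3, 3)
```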
### 2. A first example
- Recover the weight 1 and the bias 3 in y = 1*x + 3.
- Define the target function:

```python
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data*1.0 + 3.0
```

- Create the TensorFlow structure:

```python
Weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0))  # create the Weights variable, initialized in the range -1.0~1.0
biases = tf.Variable(tf.zeros([1]))                        # create the bias, initialized to 0
y = Weights*x_data + biases                                # define the model
loss = tf.reduce_mean(tf.square(y-y_data))                 # define the loss: mean squared difference between prediction and true value
optimizer = tf.train.GradientDescentOptimizer(0.5)         # 0.5 is the learning rate
train = optimizer.minimize(loss)                           # minimize the loss with gradient descent
init = tf.initialize_all_variables()                       # initialize all variables
```

- Define the Session:

```python
sess = tf.Session()
sess.run(init)
```

- Train and print the results:

```python
for i in range(201):
    sess.run(train)
    if i % 20 == 0:
        print i, sess.run(Weights), sess.run(biases)
```

The output is:

```
0 [ 1.60895896] [ 3.67376709]
20 [ 1.04673827] [ 2.97489643]
40 [ 1.011392] [ 2.99388123]
60 [ 1.00277638] [ 2.99850869]
80 [ 1.00067675] [ 2.99963641]
100 [ 1.00016499] [ 2.99991131]
120 [ 1.00004005] [ 2.99997854]
140 [ 1.00000978] [ 2.99999475]
160 [ 1.0000025] [ 2.99999857]
180 [ 1.00000119] [ 2.99999928]
200 [ 1.00000119] [ 2.99999928]
```
### 3. Session
- Running `session.run()` returns the result of the operation, or the part of the graph, that you ask for.
- Define a constant matrix: `tf.constant([[3,3]])`
- Matrix multiplication: `tf.matmul(matrix1, matrix2)`
- Two ways of running a Session:
  - Close it manually:

```python
sess = tf.Session()
print sess.run(product)
sess.close()
```

  - Use `with`, which closes the session automatically when the block ends:

```python
with tf.Session() as sess:
    print sess.run(product)
```
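The snippets above refer to a `product` node without defining it; the following self-contained sketch (my own, not from the original post) fills in the two constant matrices and runs the multiplication both ways:

```python
import tensorflow as tf

matrix1 = tf.constant([[3, 3]])        # 1x2 constant matrix
matrix2 = tf.constant([[2], [2]])      # 2x1 constant matrix
product = tf.matmul(matrix1, matrix2)  # matrix multiplication: [[3*2 + 3*2]] = [[12]]

# method 1: close the session manually
sess = tf.Session()
print(sess.run(product))               # [[12]]
sess.close()

# method 2: the with-block closes the session automatically
with tf.Session() as sess:
    print(sess.run(product))           # [[12]]
```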
### 4. Variable
- Define a variable: `tf.Variable()`
- Initialize all variables: `init = tf.initialize_all_variables()`
- The variables only become active after you run the initializer in a session: `sess.run(init)`
- When printing, you must read the variable through the session (`sess.run(...)`) to get its value; printing the Python object directly will not show the result.
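A minimal sketch of my own (again on the TF 0.x API used throughout these notes) showing the define/initialize/run pattern with a variable used as a counter:

```python
import tensorflow as tf

state = tf.Variable(0, name='counter')    # a variable with initial value 0
one = tf.constant(1)
new_value = tf.add(state, one)            # state + 1
update = tf.assign(state, new_value)      # write the new value back into state

init = tf.initialize_all_variables()      # must be defined after all variables

with tf.Session() as sess:
    sess.run(init)                        # activate the variables
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))            # 1, 2, 3 -- read the value through sess.run()
```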
### 5. Placeholder
- First define the placeholders, then feed the actual values in `Session.run()`.
- `placeholder` always appears together with `feed_dict={}`.

```python
input1 = tf.placeholder(tf.float32)  # a placeholder needs a type, usually float32
input2 = tf.placeholder(tf.float32)
output = tf.mul(input1, input2)      # multiplication
with tf.Session() as sess:
    print sess.run(output, feed_dict={input1: 7., input2: 2.})  # placeholder and feed_dict={} always go together
```
## III. Defining a Neural Network
### 1. The add_layer() helper

```python
'''Arguments: input data, size of the previous layer, size of the current layer, activation function'''
def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))  # randomly initialize the weights
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)           # initialize the biases to 0.1
    Ws_plus_b = tf.matmul(inputs, Weights) + biases               # pre-activation value
    if activation_function is None:
        outputs = Ws_plus_b
    else:
        outputs = activation_function(Ws_plus_b)                  # apply the activation function
    return outputs
```
### 2. Building the network
- Define a quadratic target function:

```python
x_data = np.linspace(-1, 1, 300, dtype=np.float32)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape).astype(np.float32)
y_data = np.square(x_data) - 0.5 + noise
```

- Define the placeholders that will receive the data later:

```python
xs = tf.placeholder(tf.float32, [None, 1])  # None means any number of examples; there is only one feature, hence 1
ys = tf.placeholder(tf.float32, [None, 1])
```

- Define the hidden layer:

```python
layer1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)  # first layer: input size 1, 10 hidden neurons, TensorFlow's built-in tf.nn.relu activation
```

- Define the output layer:

```python
prediction = add_layer(layer1, 10, 1)  # takes the previous layer as input
```

- Compute the loss:

```python
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys-prediction), reduction_indices=[1]))  # sum the squared differences, then take the mean
```

- Minimize the loss with gradient descent:

```python
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
```

- Initialize all variables:

```python
init = tf.initialize_all_variables()
```

- Define the Session:

```python
sess = tf.Session()
sess.run(init)
```

- Train and print the loss:

```python
for i in range(1000):
    sess.run(train, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        print sess.run(loss, feed_dict={xs: x_data, ys: y_data})
```

Result:

```
0.45402
0.0145364
0.00721318
0.0064215
0.00614493
0.00599307
0.00587578
0.00577039
0.00567172
0.00558008
0.00549546
0.00541595
0.00534059
0.00526139
0.00518873
0.00511403
0.00504063
0.0049613
0.0048874
0.004819
```
### 3. Visualizing the result
- Show the data:

```python
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(x_data, y_data)
plt.ion()   # keep the figure interactive so plotting does not block
plt.show()
```

- Plot the prediction dynamically:

```python
try:
    ax.lines.remove(lines[0])  # remove the line drawn in the previous step; wrapped in try/except because it does not exist on the first iteration
except Exception:
    pass
prediction_value = sess.run(prediction, feed_dict={xs: x_data})
# plot the prediction
lines = ax.plot(x_data, prediction_value, 'r-', lw=3)
plt.pause(0.1)  # pause for 0.1 s
```

![enter description here][3]

## IV. TensorFlow Visualization
### 1. tensorboard, TensorFlow's visualization tool
- Visualize the structure of the network.
- The `input` scope:

```python
with tf.name_scope('input'):
    xs = tf.placeholder(tf.float32, [None, 1], name='x_in')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_in')
```
![enter description here][4]

- The `layer` scope:

```python
def add_layer(inputs, in_size, out_size, activation_function=None):
    with tf.name_scope('layer'):
        with tf.name_scope('Weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
        with tf.name_scope('Ws_plus_b'):
            Ws_plus_b = tf.matmul(inputs, Weights) + biases
        if activation_function is None:
            outputs = Ws_plus_b
        else:
            outputs = activation_function(Ws_plus_b)
        return outputs
```
![enter description here][5]

- The `loss` and `train` scopes:

```python
with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys-prediction), reduction_indices=1))
with tf.name_scope('train'):
    train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
```

![enter description here][6]

- Write the graph to a file:

```python
writer = tf.train.SummaryWriter("logs/", sess.graph)
```
- View it in a browser (Chrome):
  - In a terminal run `tensorboard --logdir='logs/'`; it prints the address to visit.
  - Open that address in the browser.
  - The `tensorboard` command lives in the **bin** directory of the **python** installation directory; you can create a symlink for it.

### 2. Visualizing the training process
- Visualize the Weights and biases.
- Give every layer a name:

```python
layer_name = 'layer%s' % n_layer
```

- Record histograms with `tf.histogram_summary(name, value)`:

```python
def add_layer(inputs, in_size, out_size, n_layer, activation_function=None):
    layer_name = 'layer%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope('Weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
            tf.histogram_summary(layer_name+'/weights', Weights)
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
            tf.histogram_summary(layer_name+'/biases', biases)
        with tf.name_scope('Ws_plus_b'):
            Ws_plus_b = tf.matmul(inputs, Weights) + biases
        if activation_function is None:
            outputs = Ws_plus_b
        else:
            outputs = activation_function(Ws_plus_b)
        tf.histogram_summary(layer_name+'/outputs', outputs)
        return outputs
```

- Merge all summaries:

```python
merged = tf.merge_all_summaries()
```

- Write them to a file:

```python
writer = tf.train.SummaryWriter("logs/", sess.graph)
```

- Train for 1000 steps, recording every 50 steps:

```python
for i in range(1000):
    sess.run(train, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        summary = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(summary, i)
```
- View it with `tensorboard` in the same way.
  ![enter description here][7]
- Visualize the loss (cost) function:
  - Add `tf.scalar_summary('loss', loss)`.
  ![enter description here][8]

## V. Handwritten Digit Recognition, Part 1
### 1. Notes
- [Full code](https://github.com/lawlite19/MachineLearning_TensorFlow/blob/master/Mnist_01/mnist.py): `https://github.com/lawlite19/MachineLearning_TensorFlow/blob/master/Mnist_01/mnist.py`
- This uses my own dataset rather than the MNIST dataset bundled with TensorFlow.
- I implemented this before in plain Python (`https://github.com/lawlite19/MachineLearning_Python`); here it is implemented with `tensorflow`.
- The neural network has only two layers.

### 2. Implementation
- Add a layer:

```python
'''Add one layer to the network'''
def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))  # weights, in_size x out_size
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Ws_plus_b = tf.matmul(inputs, Weights) + biases               # weighted input plus bias
    if activation_function is None:
        outputs = Ws_plus_b
    else:
        outputs = activation_function(Ws_plus_b)                  # apply the activation function
    return outputs
```
- The main routine:

```python
'''Main routine'''
def NeuralNetwork():
    data_digits = spio.loadmat('data_digits.mat')
    X = data_digits['X']
    y = data_digits['y']
    m, n = X.shape
    class_y = np.zeros((m, 10))      # y holds the digits 0,1,2,...,9 and must be mapped to one-hot form
    for i in range(10):
        class_y[:, i] = np.float32(y == i).reshape(1, -1)
```
- Compute the accuracy:

```python
'''Compute the prediction accuracy'''
def compute_accuracy(xs, ys, X, y, sess, prediction):
    y_pre = sess.run(prediction, feed_dict={xs: X})
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(y, 1))  # tf.argmax gives the index of the largest value along a dimension, i.e. the predicted digit; tf.equal checks whether the prediction matches the true label
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))   # the mean is the accuracy
    result = sess.run(accuracy, feed_dict={xs: X, ys: y})
    return result
```
- Print the prediction accuracy after each round.
  ![enter description here][9]

## VI. Handwritten Digit Recognition, Part 2
### 1. Notes
- [Full code](https://github.com/lawlite19/MachineLearning_TensorFlow/blob/master/Mnist_02/mnist.py): `https://github.com/lawlite19/MachineLearning_TensorFlow/blob/master/Mnist_02/mnist.py`
- Uses the MNIST dataset bundled with TensorFlow (you can also download it from http://yann.lecun.com/exdb/mnist/).
- The implementation is similar to the one above, but MNIST comes with a dedicated test set.

### 2. Code
- Stochastic gradient descent (`SGD`): pick `100` examples for each training step.
```python
for i in range(2000):
    batch_xs, batch_ys = minist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys})
    if i % 50 == 0:
        print(compute_accuracy(xs, ys, minist.test.images, minist.test.labels, sess, prediction))
```
- Print the prediction accuracy after each round.
  ![enter description here][10]

## VII. Handwritten Digit Recognition, Part 3: CNN (Convolutional Neural Network)
### 1. Notes
- For background on **convolutional neural networks (CNN)** see [my blog post](http://blog.csdn.net/u013082989/article/details/53673602): http://blog.csdn.net/u013082989/article/details/53673602
- Or [GitHub](https://github.com/lawlite19/DeepLearning_Python): https://github.com/lawlite19/DeepLearning_Python
- [Full code](https://github.com/lawlite19/MachineLearning_TensorFlow/blob/master/Mnist_03_CNN/mnist_cnn.py): `https://github.com/lawlite19/MachineLearning_TensorFlow/blob/master/Mnist_03_CNN/mnist_cnn.py`
- Uses the MNIST dataset bundled with TensorFlow (you can also download it from http://yann.lecun.com/exdb/mnist/).

### 2. Implementation
- Weight and bias initialization functions:
  - Weights are initialized with `truncated_normal` with a standard deviation `stddev` of 0.1.
  - Biases are initialized to the constant 0.1.

```python
'''Weight initialization'''
def weight_variable(shape):
    inital = tf.truncated_normal(shape, stddev=0.1)  # initialize with a truncated normal distribution
    return tf.Variable(inital)

'''Bias initialization'''
def bias_variable(shape):
    inital = tf.constant(0.1, shape=shape)           # biases are initialized to a constant
    return tf.Variable(inital)
```
- The convolution function:
  - The 1s in `strides[0]` and `strides[3]` are fixed; the two middle 1s mean the filter moves 1 step in the x direction and 1 step in the y direction.
  - `padding='SAME'` means the output image after the convolution has the same size as the input image.

```python
'''Convolution'''
def conv2d(x, W):  # x is the image input, W is this conv layer's weights
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')  # strides[0] and strides[3] are fixed to 1; the middle two 1s are the step sizes in x and y
```
- The pooling function:
  - `ksize` specifies the size of the pooling window.
  - `strides` is set according to the pooling window size.

```python
'''Max pooling'''
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')  # the pooling window is 2x2, so ksize=[1,2,2,1]; the stride is 2, so strides=[1,2,2,1]
```
- Load the `mnist` data and define the `placeholder`s (a sketch of these definitions follows below):
  - In the input `x_image`, the trailing `1` is the number of `channel`s; for an `RGB` image with 3 color channels it would be 3.
  - `keep_prob` is used for **dropout** to prevent overfitting.
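A minimal sketch of these definitions (my own, assuming the usual MNIST setup of 28x28 grayscale images and 10 classes; the variable names follow the text):

```python
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)  # load the MNIST data

xs = tf.placeholder(tf.float32, [None, 784])   # 28*28 = 784 pixels per image
ys = tf.placeholder(tf.float32, [None, 10])    # 10 classes
keep_prob = tf.placeholder(tf.float32)         # dropout keep probability
x_image = tf.reshape(xs, [-1, 28, 28, 1])      # trailing 1 = number of channels (grayscale); RGB would use 3
```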
- First convolution and pooling layer, using the **ReLU** activation function.
- Second convolution and pooling layer.
- First fully connected layer.
- `dropout` to prevent overfitting.
- Final fully connected prediction layer: a **softmax** classifier trained by minimizing the **cross-entropy loss** with gradient descent.
- Define the Session and train with `SGD` (a combined sketch of these steps is given below).
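A combined sketch of the steps above (my own reconstruction, not the author's exact code); it reuses `weight_variable`, `bias_variable`, `conv2d` and `max_pool_2x2` from earlier and assumes the standard MNIST CNN layout with 32 and 64 filters and a 1024-unit fully connected layer, matching the hyperparameters quoted later in these notes:

```python
# first conv + pool layer: 5x5 filters, 1 input channel, 32 output channels
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)   # output 28x28x32
h_pool1 = max_pool_2x2(h_conv1)                            # output 14x14x32

# second conv + pool layer: 32 -> 64 channels
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)   # output 14x14x64
h_pool2 = max_pool_2x2(h_conv2)                            # output 7x7x64

# first fully connected layer
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])           # flatten the conv output
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# dropout to prevent overfitting
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# final fully connected layer with a softmax classifier
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# cross-entropy loss minimized with gradient descent
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(1e-4).minimize(cross_entropy)

# Session and SGD training loop
sess = tf.Session()
sess.run(tf.initialize_all_variables())
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
```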
- The accuracy function:
  - Same as the two accuracy functions above, except for the extra **dropout** parameter `keep_prob`.

```python
'''Compute the prediction accuracy'''
def compute_accuracy(xs, ys, X, y, keep_prob, sess, prediction):
    y_pre = sess.run(prediction, feed_dict={xs: X, keep_prob: 1.0})  # predict; keep_prob is the dropout keep probability, set to 1.0 for evaluation
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(y, 1))  # tf.argmax gives the index of the largest value along a dimension, i.e. the predicted digit; tf.equal checks whether the prediction matches the true label
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))   # the mean is the accuracy
    result = sess.run(accuracy, feed_dict={xs: X, ys: y, keep_prob: 1.0})
    return result
```
### 3. Results
- Accuracy on the test set:
  ![enter description here][11]
- Checking CPU and memory with the `top` command shows the run is quite CPU- and memory-hungry, so I stopped it after only four outputs above.
  ![enter description here][12]
- I ran the `TensorFlow` program inside a virtual machine with `5G` of memory allocated; with too little memory it raises an error.

---

## VIII. Saving and Restoring a Neural Network
### 1. Saving
- Define the data to save:
```python
W = tf.Variable(initial_value=[[1, 2, 3], [3, 4, 5]],
                name='weights', dtype=tf.float32)  # note that name and dtype must be specified
b = tf.Variable(initial_value=[1, 2, 3],
                name='biases', dtype=tf.float32)
init = tf.initialize_all_variables()
```
- Save:

```python
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(init)
    save_path = saver.save(sess, 'my_network/save_net.ckpt')  # save directory; create the my_network directory inside the current project first
    print('saved to:', save_path)
```
### 2. Restoring
- Define the variables:
```python
W = tf.Variable(np.arange(6).reshape((2, 3)),
                name='weights', dtype=tf.float32)  # must match what was saved earlier
b = tf.Variable(np.arange(3),
                name='biases', dtype=tf.float32)
```
- Restore with `restore`:

```python
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'my_network/save_net.ckpt')
    print('weights:', sess.run(W))  # print the restored values
    print('biases:', sess.run(b))
```
---

The sections below follow `tensorflow-tutorial` and use `python3.5`.

## IX. Linear Model
- [Full code][13]
- Uses the `MNIST` dataset.

### 1. Load the MNIST dataset and print some information

```python
'''Load MNIST data and print some information'''
data = input_data.read_data_sets("MNIST_data", one_hot=True)
print("Size of:")
print("\t training-set:\t\t{}".format(len(data.train.labels)))
print("\t test-set:\t\t\t{}".format(len(data.test.labels)))
print("\t validation-set:\t{}".format(len(data.validation.labels)))
print(data.test.labels[0:5])
data.test.cls = np.array([label.argmax() for label in data.test.labels])  # get the actual class values
print(data.test.cls[0:5])
```
### 2. Plot 9 images
- The helper function:

```python
'''define a function to plot 9 images'''
def plot_images(images, cls_true, cls_pred=None):
    '''
    @parameter images: the images info
    @parameter cls_true: the true value of each image
    @parameter cls_pred: the predicted value, default is None
    '''
    assert len(images) == len(cls_true) == 9  # only show 9 images
    fig, axes = plt.subplots(nrows=3, ncols=3)
    for i, ax in enumerate(axes.flat):
        ax.imshow(images[i].reshape(img_shape), cmap="binary")  # binary means a black-and-white image
        # show the true and predicted values
        if cls_pred is None:
            xlabel = "True: {0}".format(cls_true[i])
        else:
            xlabel = "True: {0},Pred: {1}".format(cls_true[i], cls_pred[i])
        ax.set_xlabel(xlabel)
        ax.set_xticks([])  # remove the ticks
        ax.set_yticks([])
    plt.show()
```

- Pick 9 images from the test set and show them:
```python
'''show 9 images'''
images = data.test.images[0:9]
cls_true = data.test.cls[0:9]
plot_images(images, cls_true)
```

![enter description here][14]

### 3. Define the model
- Define the `placeholder`s:

```python
'''define the placeholder'''
X = tf.placeholder(tf.float32, [None, img_size_flat])    # None means an arbitrary number of examples; the feature size is img_size_flat
y_true = tf.placeholder(tf.float32, [None, num_classes]) # output size is num_classes
y_true_cls = tf.placeholder(tf.int64, [None])
```
- Define the weights and biases:

```python
'''define weights and biases'''
weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))  # img_size_flat x num_classes
biases = tf.Variable(tf.zeros([num_classes]))
```
- Define the model:

```python
'''define the model'''
logits = tf.matmul(X, weights) + biases
y_pred = tf.nn.softmax(logits)
y_pred_cls = tf.argmax(y_pred, dimension=1)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits)
cost = tf.reduce_mean(cross_entropy)
'''define the optimizer'''
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)
```
- Define the accuracy:

```python
'''define the accuracy'''
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
- Define the session:

```python
'''run the data graph and use mini-batch gradient descent'''
session = tf.Session()
session.run(tf.global_variables_initializer())
batch_size = 100
```
### 4. Define an optimize() function for mini-batch training

```python
'''define a function to run the optimizer'''
def optimize(num_iterations):
    '''
    @parameter num_iterations: the number of training iterations
    '''
    for i in range(num_iterations):
        x_batch, y_true_batch = data.train.next_batch(batch_size)
        feed_dict_train = {X: x_batch, y_true: y_true_batch}
        session.run(optimizer, feed_dict=feed_dict_train)
```
### 5. Define a function to print the accuracy
- Code:

```python
feed_dict_test = {X: data.test.images,
                  y_true: data.test.labels,
                  y_true_cls: data.test.cls}

'''define a function to print the accuracy'''
def print_accuracy():
    acc = session.run(accuracy, feed_dict=feed_dict_test)
    print("Accuracy on test-set:{0:.1%}".format(acc))
```

- Output: `Accuracy on test-set: 89.4%`
### 6. Define a function to plot misclassified images
- Code:

```python
'''define a function to plot the wrongly predicted images'''
def plot_example_errors():
    correct, cls_pred = session.run([correct_prediction, y_pred_cls], feed_dict=feed_dict_test)
    incorrect = (correct == False)
    images = data.test.images[incorrect]  # get the wrongly predicted images
    cls_pred = cls_pred[incorrect]        # get the predicted values
    cls_true = data.test.cls[incorrect]   # get the true values
    plot_images(images[0:9], cls_true[0:9], cls_pred[0:9])
```

- Output:
### 7. Define a function to visualize the weights
- Code:

```python
'''define a function to plot the weights'''
def plot_weights():
    w = session.run(weights)
    w_min = np.min(w)
    w_max = np.max(w)
    fig, axes = plt.subplots(3, 4)
    fig.subplots_adjust(0.3, 0.3)
    for i, ax in enumerate(axes.flat):
        if i < 10:
            image = w[:, i].reshape(img_shape)
            ax.set_xlabel("Weights: {0}".format(i))
            ax.imshow(image, vmin=w_min, vmax=w_max, cmap="seismic")
        ax.set_xticks([])
        ax.set_yticks([])
    plt.show()
```

- Output:
### 8. Define a function to print the confusion matrix
- Code:

```python
'''define a function to print and plot the confusion matrix using scikit-learn'''
def print_confusion_martix():
    cls_true = data.test.cls  # test-set true values
    cls_pred = session.run(y_pred_cls, feed_dict=feed_dict_test)  # test-set predicted values
    cm = confusion_matrix(y_true=cls_true, y_pred=cls_pred)  # use sklearn confusion_matrix
    print(cm)
    plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)  # plot the confusion matrix as an image
    plt.tight_layout()
    plt.colorbar()
    tick_marks = np.arange(num_classes)
    plt.xticks(tick_marks, range(num_classes))
    plt.yticks(tick_marks, range(num_classes))
    plt.xlabel('Predicted')
    plt.ylabel('True')
    plt.show()
```

- Output:
## X. CNN
- Full code
- Uses the MNIST dataset.
- The data loading and 9-image plotting functions are the same as above and are not repeated in this README.

### 1. Define the CNN hyperparameters

```python
'''define cnn description'''
filter_size1 = 5    # the first conv filter size is 5x5
num_filters1 = 32   # there are 32 filters
filter_size2 = 5    # the second conv filter size
num_filters2 = 64   # there are 64 filters
fc_size = 1024      # fully-connected layer size
```
### 2. Weight and bias initialization functions

```python
'''define a function to initialize weights'''
def initialize_weights(shape):
    '''@param shape: the shape of the weights'''
    return tf.Variable(tf.truncated_normal(shape=shape, stddev=0.1))

'''define a function to initialize biases'''
def initialize_biases(length):
    '''@param length: the length of the bias vector'''
    return tf.Variable(tf.constant(0.1, shape=[length]))
```
### 3. Convolution (and optional pooling) function

```python
'''define a function to do conv, plus pooling if requested'''
def conv_layer(input, num_input_channels, filter_size, num_output_filters, use_pooling=True):
    '''
    @param input: the output of the previous layer
    @param num_input_channels: number of input channels
    @param filter_size: the filter size
    @param num_output_filters: the number of output channels
    @param use_pooling: whether to apply max pooling
    '''
    shape = [filter_size, filter_size, num_input_channels, num_output_filters]
    weights = initialize_weights(shape=shape)
    biases = initialize_biases(length=num_output_filters)  # one for each filter
    layer = tf.nn.conv2d(input=input, filter=weights, strides=[1, 1, 1, 1], padding='SAME')
    layer += biases
    if use_pooling:
        layer = tf.nn.max_pool(value=layer, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")  # the pooling window is 2x2, so ksize=[1,2,2,1]
    layer = tf.nn.relu(layer)
    return layer, weights
```
### 4. Flattening the conv layer

```python
'''define a function to flatten a conv layer'''
def flatten_layer(layer):
    '''@param layer: the conv layer'''
    layer_shape = layer.get_shape()  # layer_shape == [num_images, img_height, img_width, num_channels]
    num_features = layer_shape[1:4].num_elements()  # [1:4] means the last three dimensions, i.e. the flattened size
    layer_flat = tf.reshape(layer, [-1, num_features])  # reshape to flat; -1 means the number of images does not matter
    return layer_flat, num_features
```
### 5. Fully connected layer

```python
'''define a function for the fully-connected layer'''
def fc_layer(input, num_inputs, num_outputs, use_relu=True):
    '''
    @param input: the input
    @param num_inputs: the input size
    @param num_outputs: the output size
    @param use_relu: whether to use the relu activation function
    '''
    weights = initialize_weights(shape=[num_inputs, num_outputs])
    biases = initialize_biases(num_outputs)
    layer = tf.matmul(input, weights) + biases
    if use_relu:
        layer = tf.nn.relu(layer)
    return layer
```
### 6. Define the model
- Define the placeholders:

```python
'''define the placeholder'''
X = tf.placeholder(tf.float32, shape=[None, img_flat_size], name="X")
X_image = tf.reshape(X, shape=[-1, img_size, img_size, num_channels])  # reshape to the image shape
y_true = tf.placeholder(tf.float32, [None, num_classes], name="y_true")
y_true_cls = tf.argmax(y_true, axis=1)
keep_prob = tf.placeholder(tf.float32)  # dropout placeholder
```
- Define the conv layers, dropout, and fully connected layers:

```python
'''define the cnn model'''
layer_conv1, weights_conv1 = conv_layer(input=X_image, num_input_channels=num_channels, filter_size=filter_size1, num_output_filters=num_filters1, use_pooling=True)
print("conv1:", layer_conv1)
layer_conv2, weights_conv2 = conv_layer(input=layer_conv1, num_input_channels=num_filters1, filter_size=filter_size2, num_output_filters=num_filters2, use_pooling=True)
print("conv2:", layer_conv2)
layer_flat, num_features = flatten_layer(layer_conv2)  # num_features is 7x7x64
print("flatten layer:", layer_flat)
layer_fc1 = fc_layer(layer_flat, num_features, fc_size, use_relu=True)
print("fully-connected layer1:", layer_fc1)
layer_drop_out = tf.nn.dropout(layer_fc1, keep_prob)  # dropout operation
layer_fc2 = fc_layer(layer_drop_out, fc_size, num_classes, use_relu=False)
print("fully-connected layer2:", layer_fc2)
y_pred = tf.nn.softmax(layer_fc2)
y_pred_cls = tf.argmax(y_pred, axis=1)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=layer_fc2)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)  # use AdamOptimizer
```
- Define the accuracy:

```python
'''define accuracy'''
correct_prediction = tf.equal(y_true_cls, y_pred_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, dtype=tf.float32))
```
### 7. Define the optimize() training function (mini-batch)
- Code:

```python
'''define a function to train the model with mini-batches'''
total_iterations = 0  # record the total iterations
def optimize(num_iterations):
    '''@param num_iterations: the number of batch training iterations'''
    global total_iterations
    start_time = time.time()
    for i in range(total_iterations, total_iterations + num_iterations):
        x_batch, y_batch = data.train.next_batch(batch_size)
        feed_dict = {X: x_batch, y_true: y_batch, keep_prob: 0.5}
        session.run(optimizer, feed_dict=feed_dict)
        if i % 10 == 0:
            acc = session.run(accuracy, feed_dict=feed_dict)
            msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"  # {0:>6} means a fixed width of 6; {1:>6.1%} means width 6 with 1 decimal place
            print(msg.format(i + 1, acc))
    total_iterations += num_iterations
    end_time = time.time()
    time_dif = end_time - start_time
    print("time usage:" + str(timedelta(seconds=int(round(time_dif)))))
```

- Output:
```
Optimization Iteration:    651, Training Accuracy:  99.0%
Optimization Iteration:    661, Training Accuracy:  99.0%
Optimization Iteration:    671, Training Accuracy:  99.0%
Optimization Iteration:    681, Training Accuracy:  99.0%
Optimization Iteration:    691, Training Accuracy:  99.0%
Optimization Iteration:    701, Training Accuracy:  99.0%
Optimization Iteration:    711, Training Accuracy:  99.0%
Optimization Iteration:    721, Training Accuracy:  99.0%
Optimization Iteration:    731, Training Accuracy:  99.0%
Optimization Iteration:    741, Training Accuracy: 100.0%
Optimization Iteration:    751, Training Accuracy:  99.0%
Optimization Iteration:    761, Training Accuracy:  99.0%
Optimization Iteration:    771, Training Accuracy:  97.0%
Optimization Iteration:    781, Training Accuracy:  96.0%
Optimization Iteration:    791, Training Accuracy:  98.0%
Optimization Iteration:    801, Training Accuracy: 100.0%
Optimization Iteration:    811, Training Accuracy: 100.0%
Optimization Iteration:    821, Training Accuracy:  97.0%
Optimization Iteration:    831, Training Accuracy:  98.0%
Optimization Iteration:    841, Training Accuracy:  99.0%
Optimization Iteration:    851, Training Accuracy:  99.0%
Optimization Iteration:    861, Training Accuracy:  99.0%
Optimization Iteration:    871, Training Accuracy:  96.0%
Optimization Iteration:    881, Training Accuracy:  99.0%
Optimization Iteration:    891, Training Accuracy:  99.0%
Optimization Iteration:    901, Training Accuracy:  98.0%
Optimization Iteration:    911, Training Accuracy:  99.0%
Optimization Iteration:    921, Training Accuracy:  99.0%
Optimization Iteration:    931, Training Accuracy:  99.0%
Optimization Iteration:    941, Training Accuracy:  98.0%
Optimization Iteration:    951, Training Accuracy: 100.0%
Optimization Iteration:    961, Training Accuracy:  99.0%
Optimization Iteration:    971, Training Accuracy:  98.0%
Optimization Iteration:    981, Training Accuracy:  99.0%
Optimization Iteration:    991, Training Accuracy: 100.0%
time usage:0:07:07
```
### 8. Batch prediction function, used to show the wrongly classified images

```python
batch_size_test = 256
def print_test_accuracy(print_error=False, print_confusion_matrix=False):
    '''
    @param print_error: whether to plot the misclassified images
    @param print_confusion_matrix: whether to plot the confusion matrix
    '''
    num_test = len(data.test.images)
    cls_pred = np.zeros(shape=num_test, dtype=np.int)  # declare cls_pred
    i = 0
    # predict the test set batch by batch
    while i < num_test:
        j = min(i + batch_size_test, num_test)
        images = data.test.images[i:j, :]
        labels = data.test.labels[i:j, :]
        feed_dict = {X: images, y_true: labels, keep_prob: 1.0}  # keep_prob 1.0 disables dropout at evaluation time
        cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
        i = j
    cls_true = data.test.cls
    correct = (cls_true == cls_pred)
    correct_sum = correct.sum()  # number of correct predictions
    acc = float(correct_sum) / num_test
    msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
    print(msg.format(acc, correct_sum, num_test))
    if print_error:
        plot_error_pred(cls_pred, correct)
    if print_confusion_matrix:
        plot_confusin_martrix(cls_pred)
```
### 9. Visualizing the conv filter weights
- Code:

```python
'''define a function to plot conv weights'''
def plot_conv_weights(weights, input_channel=0):
    '''
    @param weights: the conv filter weights, e.g. weights_conv1 or weights_conv2, with 4 dimensions [filter_size, filter_size, num_input_channels, num_output_filters]
    @param input_channel: the input channel to show
    '''
    w = session.run(weights)
    w_min = np.min(w)
    w_max = np.max(w)
    num_filters = w.shape[3]  # get the number of filters
    num_grids = math.ceil(math.sqrt(num_filters))
    fig, axes = plt.subplots(num_grids, num_grids)
    for i, ax in enumerate(axes.flat):
        if i < num_filters:
            img = w[:, :, input_channel, i]  # the ith filter
            ax.imshow(img, vmin=w_min, vmax=w_max, interpolation="nearest", cmap='seismic')
        ax.set_xticks([])
        ax.set_yticks([])
    plt.show()
```

- Output:
  - First layer:
  - Second layer:
### 10. Visualizing the conv layer outputs
- Code:

```python
'''define a function to plot a conv output layer'''
def plot_conv_layer(layer, image):
    '''
    @param layer: the conv layer, which is itself an image after the convolution
    @param image: the input image
    '''
    feed_dict = {X: [image]}
    values = session.run(layer, feed_dict=feed_dict)
    num_filters = values.shape[3]  # get the number of filters
    num_grids = math.ceil(math.sqrt(num_filters))
    fig, axes = plt.subplots(num_grids, num_grids)
    for i, ax in enumerate(axes.flat):
        if i < num_filters:
            img = values[0, :, :, i]
            ax.imshow(img, interpolation="nearest", cmap="binary")
        ax.set_xticks([])
        ax.set_yticks([])
    plt.show()
```

- Output:
  - First layer:
  - Second layer:
## XI. Implementing a CNN with prettytensor
- Full code
- Uses the MNIST dataset.
- The data loading and 9-image plotting functions are the same as in section IX and are not repeated in this README.

### 1. Define the model
- Define the placeholders, same as before:

```python
'''declare the placeholder'''
X = tf.placeholder(tf.float32, [None, img_flat_size], name="X")
X_img = tf.reshape(X, shape=[-1, img_size, img_size, num_channels])
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name="y_true")
y_true_cls = tf.argmax(y_true, 1)
```
- Implement the CNN model with prettytensor:

```python
'''define the cnn model with prettytensor'''
x_pretty = pt.wrap(X_img)
with pt.defaults_scope():  # or pt.defaults_scope(activation_fn=tf.nn.relu) if every layer uses the same activation function
    y_pred, loss = x_pretty.\
        conv2d(kernel=5, depth=16, activation_fn=tf.nn.relu, name="conv_layer1").\
        max_pool(kernel=2, stride=2).\
        conv2d(kernel=5, depth=36, activation_fn=tf.nn.relu, name="conv_layer2").\
        max_pool(kernel=2, stride=2).\
        flatten().\
        fully_connected(size=128, activation_fn=tf.nn.relu, name="fc_layer1").\
        softmax_classifier(num_classes=num_classes, labels=y_true)
```
- Get the conv filter weights (for visualization later):

```python
'''define a function to get weights'''
def get_weights_variable(layer_name):
    with tf.variable_scope(layer_name, reuse=True):
        variable = tf.get_variable("weights")
    return variable

conv1_weights = get_weights_variable("conv_layer1")
conv2_weights = get_weights_variable("conv_layer2")
```
- Define the optimizer and train, same as before:

```python
'''define optimizer to train'''
optimizer = tf.train.AdamOptimizer().minimize(loss)
y_pred_cls = tf.argmax(y_pred, 1)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session = tf.Session()
session.run(tf.global_variables_initializer())
```
## XII. CNN: Saving and Loading the Model, with Early Stopping
- Full code
- Uses the MNIST dataset.
- The data loading and 9-image plotting functions are the same as in section IX and are not repeated in this README.
- The CNN model definition is the same as in section XI and is not repeated in this README.

### 1. Saving the model
- Create the saver and the directory to save into:

```python
'''define a Saver to save the network'''
saver = tf.train.Saver()
save_dir = "checkpoints/"
if not os.path.exists(save_dir):
    os.makedirs(save_dir)
save_path = os.path.join(save_dir, 'best_validation')
```

- Save the session; in the early-stopping loop of section 2 below this saves the best model seen so far:

```python
saver.save(sess=session, save_path=save_path)
```
### 2. Early Stopping

```python
'''declare the training info'''
train_batch_size = 64
best_validation_accuracy = 0.0
last_improvement = 0
require_improvement_iterations = 1000
total_iterations = 0

'''define a function to run the optimizer'''
def optimize(num_iterations):
    global total_iterations
    global best_validation_accuracy
    global last_improvement
    start_time = time.time()
    for i in range(num_iterations):
        total_iterations += 1
        X_batch, y_true_batch = data.train.next_batch(train_batch_size)
        feed_dict_train = {X: X_batch, y_true: y_true_batch}
        session.run(optimizer, feed_dict=feed_dict_train)
        if (total_iterations % 100 == 0) or (i == num_iterations-1):
            acc_train = session.run(accuracy, feed_dict=feed_dict_train)
            acc_validation, _ = validation_accuracy()
            if acc_validation > best_validation_accuracy:
                best_validation_accuracy = acc_validation
                last_improvement = total_iterations
                saver.save(sess=session, save_path=save_path)
                improved_str = "*"
            else:
                improved_str = ""
            msg = "Iter: {0:>6}, Train_batch accuracy:{1:>6.1%}, validation acc:{2:>6.1%} {3}"
            print(msg.format(i+1, acc_train, acc_validation, improved_str))
        if total_iterations - last_improvement > require_improvement_iterations:
            print('No improvement found in a while, stop running')
            break
    end_time = time.time()
    time_diff = end_time - start_time
    print("Time usage:" + str(timedelta(seconds=int(round(time_diff)))))
```
- Calling `optimize(10000)` produces:

```
Iter:   5100, Train_batch accuracy:100.0%, validation acc: 98.8% *
Iter:   5200, Train_batch accuracy:100.0%, validation acc: 98.3%
Iter:   5300, Train_batch accuracy:100.0%, validation acc: 98.7%
Iter:   5400, Train_batch accuracy: 98.4%, validation acc: 98.6%
Iter:   5500, Train_batch accuracy: 98.4%, validation acc: 98.6%
Iter:   5600, Train_batch accuracy:100.0%, validation acc: 98.7%
Iter:   5700, Train_batch accuracy: 96.9%, validation acc: 98.9% *
Iter:   5800, Train_batch accuracy:100.0%, validation acc: 98.6%
Iter:   5900, Train_batch accuracy:100.0%, validation acc: 98.6%
Iter:   6000, Train_batch accuracy: 98.4%, validation acc: 98.7%
Iter:   6100, Train_batch accuracy:100.0%, validation acc: 98.7%
Iter:   6200, Train_batch accuracy:100.0%, validation acc: 98.7%
Iter:   6300, Train_batch accuracy: 98.4%, validation acc: 98.8%
Iter:   6400, Train_batch accuracy: 98.4%, validation acc: 98.8%
Iter:   6500, Train_batch accuracy:100.0%, validation acc: 98.7%
Iter:   6600, Train_batch accuracy:100.0%, validation acc: 98.7%
Iter:   6700, Train_batch accuracy:100.0%, validation acc: 98.8%
No improvement found in a while, stop running
Time usage:0:18:43
```
The last 10 status lines (one is printed every 100 iterations) show no improvement in validation accuracy, so training stops.

### 3. Mini-batch prediction and accuracy
- Since both the test set and the validation set need to be predicted, the function takes the images as a parameter:

```python
'''define a function to predict using batches'''
batch_size_predict = 256
def predict_cls(images, labels, cls_true):
    num_images = len(images)
    cls_pred = np.zeros(shape=num_images, dtype=np.int)
    i = 0
    while i < num_images:
        j = min(i + batch_size_predict, num_images)
        feed_dict = {X: images[i:j, :], y_true: labels[i:j, :]}
        cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
        i = j
    correct = (cls_true == cls_pred)
    return correct, cls_pred
```
- For the test and validation sets just call it directly:

```python
def predict_cls_test():
    return predict_cls(data.test.images, data.test.labels, data.test.cls)

def predict_cls_validation():
    return predict_cls(data.validation.images, data.validation.labels, data.validation.cls)
```
- Compute the validation accuracy (used inside the `optimize` function above):

```python
'''calculate the acc'''
def cls_accuracy(correct):
    correct_sum = correct.sum()
    acc = float(correct_sum) / len(correct)
    return acc, correct_sum

'''define a function to calculate the validation acc'''
def validation_accuracy():
    correct, _ = predict_cls_validation()
    return cls_accuracy(correct)
```
- Compute the test accuracy, optionally plotting the wrong predictions and the confusion matrix:

```python
'''define a function to calculate the test acc'''
def print_test_accuracy(show_example_errors=False, show_confusion_matrix=False):
    correct, cls_pred = predict_cls_test()
    acc, num_correct = cls_accuracy(correct)
    num_images = len(correct)
    msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
    print(msg.format(acc, num_correct, num_images))
    # Plot some examples of mis-classifications, if desired.
    if show_example_errors:
        print("Example errors:")
        plot_example_errors(cls_pred=cls_pred, correct=correct)
    # Plot the confusion matrix, if desired.
    if show_confusion_matrix:
        print("Confusion Matrix:")
        plot_confusion_matrix(cls_pred=cls_pred)
```
## XIII. Model Ensembling
- Full code
- Uses the MNIST dataset.
- Methods that are the same as before are not repeated.
- Several CNN models are trained, and the mean of their predictions is taken as the final prediction.

### 1. Merge the training and validation sets and re-split them
- The point is to give each network slightly different training data; otherwise every model trains on exactly the same data and the final ensemble adds little.

```python
'''Merge the training set and validation set and re-split them'''
combine_images = np.concatenate([data.train.images, data.validation.images], axis=0)
combine_labels = np.concatenate([data.train.labels, data.validation.labels], axis=0)
print("merged images:", combine_images.shape)
print("merged labels:", combine_labels.shape)
combined_size = combine_labels.shape[0]
train_size = int(0.8*combined_size)
validation_size = combined_size - train_size

'''Function: randomly re-split the merged data'''
def random_training_set():
    idx = np.random.permutation(combined_size)  # random permutation of 0..combined_size-1
    idx_train = idx[0:train_size]
    idx_validation = idx[train_size:]
    x_train = combine_images[idx_train, :]
    y_train = combine_labels[idx_train, :]
    x_validation = combine_images[idx_validation, :]
    y_validation = combine_labels[idx_validation, :]
    return x_train, y_train, x_validation, y_validation
```
### 2. Ensemble the models
- Load each trained model and report its accuracy on the validation and test sets:

```python
def ensemble_predictions():
    pred_labels = []
    test_accuracies = []
    validation_accuracies = []
    for i in range(num_networks):
        saver.restore(sess=session, save_path=get_save_path(i))
        test_acc = test_accuracy()
        test_accuracies.append(test_acc)
        validation_acc = validation_accuracy()
        validation_accuracies.append(validation_acc)
        msg = "Network: {0}, validation: {1:.4f}, test: {2:.4f}"
        print(msg.format(i, validation_acc, test_acc))
        pred = predict_labels(data.test.images)
        pred_labels.append(pred)
    return np.array(pred_labels), \
           np.array(test_accuracies), \
           np.array(validation_accuracies)
```
- Call it: `pred_labels, test_accuracies, val_accuracies = ensemble_predictions()`
- Take the mean over the networks: `ensemble_pred_labels = np.mean(pred_labels, axis=0)`
- The ensembled class predictions: `ensemble_cls_pred = np.argmax(ensemble_pred_labels, axis=1)`
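To make the shapes concrete, here is a small NumPy-only illustration of the averaging step (my own example, not from the original code); `pred_labels` has shape (num_networks, num_test, num_classes):

```python
import numpy as np

# toy predictions from 2 networks for 3 test images over 4 classes
pred_labels = np.array([
    [[0.1, 0.7, 0.1, 0.1], [0.6, 0.2, 0.1, 0.1], [0.3, 0.3, 0.2, 0.2]],   # network 0
    [[0.2, 0.5, 0.2, 0.1], [0.2, 0.5, 0.2, 0.1], [0.1, 0.1, 0.7, 0.1]],   # network 1
])

ensemble_pred_labels = np.mean(pred_labels, axis=0)          # average over the networks -> shape (3, 4)
ensemble_cls_pred = np.argmax(ensemble_pred_labels, axis=1)  # predicted class per image
print(ensemble_cls_pred)  # [1 0 2]
```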
- Some further statistics:

```python
ensemble_correct = (ensemble_cls_pred == data.test.cls)
ensemble_incorrect = np.logical_not(ensemble_correct)
print(test_accuracies)
best_net = np.argmax(test_accuracies)
print(best_net)
print(test_accuracies[best_net])
best_net_pred_labels = pred_labels[best_net, :, :]
best_net_cls_pred = np.argmax(best_net_pred_labels, axis=1)
best_net_correct = (best_net_cls_pred == data.test.cls)
best_net_incorrect = np.logical_not(best_net_correct)
print("correct with the ensemble:", np.sum(ensemble_correct))
print("correct with the single best model:", np.sum(best_net_correct))
ensemble_better = np.logical_and(best_net_incorrect, ensemble_correct)  # cases where the ensemble is right and the best single model is wrong
print(ensemble_better.sum())
best_net_better = np.logical_and(best_net_correct, ensemble_incorrect)  # cases where the best single model is right and the ensemble is wrong
print(best_net_better.sum())
```
## XIV. CIFAR-10 and Variable Reuse with variable_scope
- Full code
- Uses the CIFAR-10 dataset.
- Two networks are created, one for training and one for testing; the test network reuses the weights learned during training, which is why variable reuse is needed (a minimal illustration of `variable_scope` reuse follows this list).
- Network structure:
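A quick standalone illustration of the reuse mechanism used below (my own minimal example, not from the original code):

```python
import tensorflow as tf

def make_layer(x):
    # get_variable either creates 'w' or returns the existing one, depending on the scope's reuse flag
    w = tf.get_variable("w", shape=[1], initializer=tf.constant_initializer(1.0))
    return x * w

with tf.variable_scope("network"):              # training graph: creates network/w
    train_out = make_layer(tf.constant(2.0))

with tf.variable_scope("network", reuse=True):  # test graph: reuses the same network/w
    test_out = make_layer(tf.constant(3.0))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([train_out, test_out]))      # both outputs use the identical variable w
```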
### 1. The dataset
- Import the packages:
  - `cifar10` is someone else's ready-made code for downloading and preprocessing the CIFAR-10 dataset.

```python
import cifar10
from cifar10 import img_size, num_channels, num_classes
```
- Print some dataset information:

```python
'''Download the cifar10 dataset (about 163 MB)'''
cifar10.maybe_download_and_extract()
'''Load the dataset'''
images_train, cls_train, labels_train = cifar10.load_training_data()
images_test, cls_test, labels_test = cifar10.load_test_data()
'''Print some information'''
class_names = cifar10.load_class_names()
print(class_names)
print("Size of:")
print("training set:\t\t{}".format(len(images_train)))
print("test set:\t\t\t{}".format(len(images_test)))
```
- The function to show 9 images:
  - Compared with the earlier version it adds a smooth-display option.

```python
'''Show 9 images'''
def plot_images(images, cls_true, cls_pred=None, smooth=True):  # smooth: whether to display with interpolation
    assert len(images) == len(cls_true) == 9
    fig, axes = plt.subplots(3, 3)
    for i, ax in enumerate(axes.flat):
        if smooth:
            interpolation = 'spline16'
        else:
            interpolation = 'nearest'
        ax.imshow(images[i, :, :, :], interpolation=interpolation)
        cls_true_name = class_names[cls_true[i]]
        if cls_pred is None:
            xlabel = "True:{0}".format(cls_true_name)
        else:
            cls_pred_name = class_names[cls_pred[i]]
            xlabel = "True:{0}, Pred:{1}".format(cls_true_name, cls_pred_name)
        ax.set_xlabel(xlabel)
        ax.set_xticks([])
        ax.set_yticks([])
    plt.show()
```
### 2. Define the placeholders

```python
X = tf.placeholder(tf.float32, shape=[None, img_size, img_size, num_channels], name="X")
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name="y")
y_true_cls = tf.argmax(y_true, axis=1)
```
### 3. Image preprocessing
- Preprocessing a single image:
  - The original images are 32*32 pixels and are cropped to 24*24.
  - For the training set the image is randomly cropped and flipped, with hue, brightness, and saturation jittered.
  - For the test set only a simple crop is done.
  - This difference is exactly why two networks are defined with variable_scope.

```python
'''Preprocess a single image; the test set only needs cropping'''
def pre_process_image(image, training):
    if training:
        image = tf.random_crop(image, size=[img_size_cropped, img_size_cropped, num_channels])  # random crop
        image = tf.image.random_flip_left_right(image)                   # random horizontal flip
        image = tf.image.random_hue(image, max_delta=0.05)               # hue jitter
        image = tf.image.random_brightness(image, max_delta=0.2)         # brightness jitter
        image = tf.image.random_saturation(image, lower=0.0, upper=2.0)  # saturation jitter
        '''the adjustments above may push pixel values outside [0, 1], so clip them'''
        image = tf.minimum(image, 1.0)
        image = tf.maximum(image, 0.0)
    else:
        image = tf.image.resize_image_with_crop_or_pad(image, target_height=img_size_cropped, target_width=img_size_cropped)
    return image
```
- Preprocessing a batch of images:
  - Training and testing both work on batches.
  - This calls the single-image function above.
  - `tf.map_fn(fn, elems)` applies a function (usually a lambda) to every element of `elems`.

```python
'''Call the function above to preprocess multiple images'''
def pre_process(images, training):
    images = tf.map_fn(lambda image: pre_process_image(image, training), images)  # tf.map_fn() with a lambda
    return images
```
### 4. Define the TensorFlow graph
- Define the main network:
  - Uses prettytensor.
  - Distinguishes the training and test phases.

```python
'''Define the main network'''
def main_network(images, training):
    x_pretty = pt.wrap(images)
    if training:
        phase = pt.Phase.train
    else:
        phase = pt.Phase.infer
    with pt.defaults_scope(activation_fn=tf.nn.relu, phase=phase):
        y_pred, loss = x_pretty.\
            conv2d(kernel=5, depth=64, name="layer_conv1", batch_normalize=True).\
            max_pool(kernel=2, stride=2).\
            conv2d(kernel=5, depth=64, name="layer_conv2").\
            max_pool(kernel=2, stride=2).\
            flatten().\
            fully_connected(size=256, name="layer_fc1").\
            fully_connected(size=128, name="layer_fc2").\
            softmax_classifier(num_classes, labels=y_true)
    return y_pred, loss
```
- Create the whole network, including image preprocessing and the main network:
  - `variable_scope` is needed because the test phase must reuse the parameters from the training phase.

```python
'''Create the whole network, including preprocessing and the main network'''
def create_network(training):
    # variable_scope allows variables to be reused: create new ones when training, reuse them when testing
    with tf.variable_scope("network", reuse=not training):
        images = X
        images = pre_process(images=images, training=training)
        y_pred, loss = main_network(images=images, training=training)
    return y_pred, loss
```
- Create the training-phase network:
  - `global_step` records the number of training steps; it is saved to the checkpoint below, and `trainable=False` keeps training from changing it.

```python
'''Create the training-phase network'''
global_step = tf.Variable(initial_value=0, name="global_step",
                          trainable=False)  # not changed by training
_, loss = create_network(training=True)
optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss, global_step)
```
- Create the test-phase network:
  - Also define the accuracy.

```python
'''Create the test-phase network'''
y_pred, _ = create_network(training=False)
y_pred_cls = tf.argmax(y_pred, dimension=1)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
### 5. Getting the weights and each layer's outputs
- Get the weight variables:

```python
def get_weights_variable(layer_name):
    with tf.variable_scope("network/" + layer_name, reuse=True):
        variable = tf.get_variable("weights")
    return variable

weights_conv1 = get_weights_variable("layer_conv1")
weights_conv2 = get_weights_variable("layer_conv2")
```
- Get the output tensor of each layer:

```python
def get_layer_output(layer_name):
    tensor_name = "network/" + layer_name + "/Relu:0"
    tensor = tf.get_default_graph().get_tensor_by_name(tensor_name)
    return tensor

output_conv1 = get_layer_output("layer_conv1")
output_conv2 = get_layer_output("layer_conv2")
```
### 6. Saving and restoring the graph parameters
- The restore is wrapped in try/except because it fails the first time, when no checkpoint exists yet:

```python
'''Run the tensorflow graph'''
session = tf.Session()
save_dir = "checkpoints/"
if not os.path.exists(save_dir):
    os.makedirs(save_dir)
save_path = os.path.join(save_dir, 'cifat10_cnn')
'''Try to restore the latest checkpoint; this can fail, e.g. on the first run when no checkpoint exists'''
try:
    print("Trying to restore the latest checkpoint...")
    last_chk_path = tf.train.latest_checkpoint(save_dir)
    saver.restore(session, save_path=last_chk_path)
    print("Restored checkpoint from:", last_chk_path)
except:
    print("Failed to restore, initializing variables instead")
    session.run(tf.global_variables_initializer())
```
### 7. Training
- Get a random batch:

```python
'''SGD'''
train_batch_size = 64
def random_batch():
    num_images = len(images_train)
    idx = np.random.choice(num_images, size=train_batch_size, replace=False)
    x_batch = images_train[idx, :, :, :]
    y_batch = labels_train[idx, :]
    return x_batch, y_batch
```
- Train the network:
  - Save a checkpoint every 1000 steps.
  - Because the code above restores a previously saved network, including the saved step counter, training can resume where it left off.

```python
def optimize(num_iterations):
    start_time = time.time()
    for i in range(num_iterations):
        x_batch, y_batch = random_batch()
        feed_dict_train = {X: x_batch, y_true: y_batch}
        i_global, _ = session.run([global_step, optimizer], feed_dict=feed_dict_train)
        if (i_global % 100 == 0) or (i == num_iterations-1):
            batch_acc = session.run(accuracy, feed_dict=feed_dict_train)
            msg = "global step: {0:>6}, training batch accuracy: {1:>6.1%}"
            print(msg.format(i_global, batch_acc))
        if (i_global % 1000 == 0) or (i == num_iterations-1):
            saver.save(session, save_path=save_path, global_step=global_step)
            print("checkpoint saved")
    end_time = time.time()
    time_diff = end_time - start_time
    print("time used:", str(timedelta(seconds=int(round(time_diff)))))
```
## XV. The Inception Model (GoogLeNet)
- Full code
- Uses a pre-trained inception model; the model is so complex that an ordinary machine cannot handle training it.
- Network structure:

### 1. Download and load the inception model
- Since the model is pre-trained, we do not need to define its structure ourselves.
- Import the packages:
  - `inception` is someone else's ready-made code for downloading the inception model.

```python
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
import inception  # third-party helper for loading the inception model
import os
```
- Download and load the model:

```python
'''Download and load the inception model'''
inception.maybe_download()
model = inception.Inception()
```
- Function to classify and show an image:

```python
'''Classify and show an image'''
def classify(image_path):
    plt.imshow(plt.imread(image_path))
    plt.show()
    pred = model.classify(image_path=image_path)
    model.print_scores(pred=pred, k=10, only_first_name=True)
```
- Show the resized image:
  - The inception model requires 299*299-pixel inputs, so the image is resized to that size before being fed in.

```python
'''Show what the preprocessed image looks like'''
def plot_resized_image(image_path):
    resized_image = model.get_resized_image(image_path)
    plt.imshow(resized_image, interpolation='nearest')
    plt.show()
plot_resized_image(image_path)
```
## XVI. Transfer Learning
- Full code
- The network is still the inception model from the previous section; the final fully connected layer is removed, and a new fully connected layer is built and trained.
- Because the inception model is already trained, its convolutional layers capture the features while the fully connected layers at the end do the classification, so it is enough to train only the new fully connected layer.
- Computing the transfer values for every image is expensive, so a cache is used: the first run computes and stores them, and later runs simply read the stored results, which saves a lot of time.
- The transfer values are the values of the layer just before the inception model's Softmax layer.
- For the cifar-10 dataset it took several hours on my lab machine to compute the transfer values, so this step is fairly slow.
- In the end we effectively train the small network below, with the corresponding transfer values as its input.
### 1. Preparation
- Import the packages:

```python
import numpy as np
import tensorflow as tf
import prettytensor as pt
from matplotlib import pyplot as plt
import time
from datetime import timedelta
import os
import inception                              # third-party code for downloading the inception model
from inception import transfer_values_cache   # cache helper
import cifar10                                # also third-party, downloads the cifar-10 dataset
from cifar10 import num_classes
```
- Download the cifar-10 dataset:

```python
'''Download the cifar-10 dataset'''
cifar10.maybe_download_and_extract()
class_names = cifar10.load_class_names()
print("the classes are:", class_names)
'''Training and test sets'''
images_train, cls_train, labels_train = cifar10.load_training_data()
images_test, cls_test, labels_test = cifar10.load_test_data()
```
- Download and load the inception model:

```python
'''Download the inception model'''
inception.maybe_download()
model = inception.Inception()
```
- Compute the transfer values of the cifar-10 training and test sets on the inception model:
  - This is very expensive, so the first run stores the results locally and later runs just read them back.
  - The shape of the transfer values is (dataset size, 2048), since they come from the layer before the softmax layer.

```python
'''Paths of the training and test caches'''
file_path_cache_train = os.path.join(cifar10.data_path, 'inception_cifar10_train.pkl')
file_path_cache_test = os.path.join(cifar10.data_path, 'inception_cifar10_test.pkl')
print('Computing transfer-values on the training set...')
images_scaled = images_train * 255.0  # cifar-10 pixels are in [0, 1]; shape=(50000, 32, 32, 3)
transfer_values_train = transfer_values_cache(cache_path=file_path_cache_train,
                                              images=images_scaled, model=model)  # shape=(50000, 2048)
print('Computing transfer-values on the test set...')
images_scaled = images_test * 255.0
transfer_values_test = transfer_values_cache(cache_path=file_path_cache_test,
                                             model=model, images=images_scaled)
print("transfer_values_train: ", transfer_values_train.shape)
print("transfer_values_test: ", transfer_values_test.shape)
```
- Visualize the transfer values of one image:

```python
'''Show the transfer values'''
def plot_transfer_values(i):
    print("input image:")
    plt.imshow(images_test[i], interpolation='nearest')
    plt.show()
    print('transfer values of this image in the inception model:')
    img = transfer_values_test[i]
    img = img.reshape((32, 64))
    plt.imshow(img, interpolation='nearest', cmap='Reds')
    plt.show()
plot_transfer_values(16)
```
### 2. Analyzing the transfer values
(1) PCA (principal component analysis)
- Reduce the data to 2 dimensions and visualize it; since the transfer values already capture the features, the different classes should be at least vaguely separable in the plot.
- Use 3000 examples, because PCA is also fairly expensive:

```python
'''Analyze the transfer values with PCA'''
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
transfer_values = transfer_values_train[0:3000]  # take 3000 examples; more would be too expensive
cls = cls_train[0:3000]
print(transfer_values.shape)
transfer_values_reduced = pca.fit_transform(transfer_values)
print(transfer_values_reduced.shape)
```

- Visualize the reduced data:

```python
# Show the reduced transfer values
def plot_scatter(values, cls):
    from matplotlib import cm as cm
    cmap = cm.rainbow(np.linspace(0.0, 1.0, num_classes))
    colors = cmap[cls]
    x = values[:, 0]
    y = values[:, 1]
    plt.scatter(x, y, color=colors)
    plt.show()
plot_scatter(transfer_values_reduced, cls)
```
(2) t-SNE
- t-SNE runs very slowly, so first use PCA to reduce to 50 dimensions:

```python
from sklearn.manifold import TSNE
pca = PCA(n_components=50)
transfer_values_50d = pca.fit_transform(transfer_values)
tsne = TSNE(n_components=2)
transfer_values_reduced = tsne.fit_transform(transfer_values_50d)
print("shape after the final reduction:", transfer_values_reduced.shape)
plot_scatter(transfer_values_reduced, cls)
```

- The classes separate fairly clearly.
### 3. Build our own network
- Use prettytensor to create one fully connected layer with a softmax classifier:

```python
'''Create the network'''
transfer_len = model.transfer_len  # size of the transfer values, 2048 here
x = tf.placeholder(tf.float32, shape=[None, transfer_len], name="x")
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name="y")
y_true_cls = tf.argmax(y_true, axis=1)
x_pretty = pt.wrap(x)
with pt.defaults_scope(activation_fn=tf.nn.relu):
    y_pred, loss = x_pretty.\
        fully_connected(1024, name="layer_fc1").\
        softmax_classifier(num_classes, labels=y_true)
```
- The optimizer:

```python
'''Optimizer'''
global_step = tf.Variable(initial_value=0, name="global_step", trainable=False)
optimizer = tf.train.AdamOptimizer(0.0001).minimize(loss, global_step)
```
- The accuracy:

```python
'''accuracy'''
y_pred_cls = tf.argmax(y_pred, axis=1)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
- SGD training:

```python
'''SGD training'''
session = tf.Session()
session.run(tf.initialize_all_variables())
train_batch_size = 64

def random_batch():
    num_images = len(images_train)
    idx = np.random.choice(num_images, size=train_batch_size, replace=False)
    x_batch = transfer_values_train[idx]
    y_batch = labels_train[idx]
    return x_batch, y_batch

def optimize(num_iterations):
    start_time = time.time()
    for i in range(num_iterations):
        x_batch, y_true_batch = random_batch()
        feed_dict_train = {x: x_batch, y_true: y_true_batch}
        i_global, _ = session.run([global_step, optimizer], feed_dict=feed_dict_train)
        if (i_global % 100 == 0) or (i == num_iterations-1):
            batch_acc = session.run(accuracy, feed_dict=feed_dict_train)
            msg = "Global Step: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
            print(msg.format(i_global, batch_acc))
    end_time = time.time()
    time_diff = end_time - start_time
    print("time used:", str(timedelta(seconds=int(round(time_diff)))))
```
- Predict the test data with batches:

```python
'''Batch prediction'''
batch_size = 256
def predict_cls(transfer_values, labels, cls_true):
    num_images = len(images_test)
    cls_pred = np.zeros(shape=num_images, dtype=np.int)
    i = 0
    while i < num_images:
        j = min(i + batch_size, num_images)
        feed_dict = {x: transfer_values[i:j], y_true: labels[i:j]}
        cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
        i = j
    correct = (cls_true == cls_pred)
    return correct, cls_pred
```
Original post: http://lawlite.me/2016/12/08/Tensorflow%E5%AD%A6%E4%B9%A0/#more