Machine Learning - Neural Networks
神經(jīng)網(wǎng)絡(luò)
文章目錄 神經(jīng)網(wǎng)絡(luò) 簡(jiǎn)介 模型表示 模型訓(xùn)練 模型應(yīng)用 補(bǔ)充說明
簡(jiǎn)介
神經(jīng)網(wǎng)絡(luò)是一個(gè)相當(dāng)古老的算法,在支持向量機(jī)的統(tǒng)治時(shí)期,其效果備受質(zhì)疑,但是,隨著機(jī)器學(xué)習(xí)的發(fā)展以及深度網(wǎng)絡(luò)的產(chǎn)生,深度神經(jīng)網(wǎng)絡(luò)已經(jīng)成為很多問題的首選模型(sota方法)。如果說之前的很多問題可以通過線性回歸或者邏輯回歸這類模型通過構(gòu)建多項(xiàng)式特征達(dá)到非線性擬合效果,那么對(duì)于圖片(每個(gè)像素點(diǎn)就是一個(gè)特征)這樣的數(shù)據(jù),邏輯回歸等模型構(gòu)建的特征量是非常龐大的,而神經(jīng)網(wǎng)絡(luò)則采用模擬人腦這一最為強(qiáng)大的學(xué)習(xí)器的方法建立層層非線性變換的模型,它可以自動(dòng)學(xué)習(xí)任意的特征項(xiàng)的組合。
Model Representation

The neural network model is built on the response mechanism of animal neurons: each neuron aggregates all of its inputs and passes the result through an activation function (a nonlinear function, such as the sigmoid) before producing an output. Several neurons form a hidden layer; the connection weights between layers are the parameters commonly spoken of in machine learning models, and each layer may also include an extra node, usually called a bias node. The data features are typically represented as a layer of their own, called the input layer; the layer producing the result (one or more neurons) is called the output layer; and every layer performing nonlinear transformations in between is called a hidden layer. The values on the connections between layers are the weights to be learned.

From this description, each neuron's computation is simply a small logistic-regression step. In matrix form, it is easy to derive that the input to the next layer is the previous layer's output transformed by the weights and then activated:

Y = sigmoid(WX + bias)

Carrying this computation forward layer by layer is called forward propagation. In theory, with enough stacked layers a neural network can fit data of any distribution; in practice, however, the deeper the network, the harder it is to train.
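As a concrete illustration, here is a minimal NumPy sketch of this forward propagation through one hidden layer; the layer sizes (400 inputs, 25 hidden units, 10 outputs) match the handwritten-digit example later in this post, and all names are illustrative.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# illustrative layer sizes: 400 input features, 25 hidden units, 10 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(25, 400)), np.zeros((25, 1))
W2, b2 = rng.normal(size=(10, 25)), np.zeros((10, 1))

x = rng.normal(size=(400, 1))  # one sample as a column vector
a1 = sigmoid(W1 @ x + b1)      # hidden-layer activation
a2 = sigmoid(W2 @ a1 + b2)     # output-layer activation, shape (10, 1)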
神經(jīng)網(wǎng)絡(luò)可以解決分類和回歸這兩種基本有監(jiān)督學(xué)習(xí)的問題,當(dāng)然,ANN(人工神經(jīng)網(wǎng)絡(luò))也可以用于處理其他的很多機(jī)器學(xué)習(xí)的基本問題。分類好一般分為二分類和多分類,二分類輸出層只需要一個(gè)神經(jīng)元表示標(biāo)簽為1的概率即可,多分類則需要多個(gè)神經(jīng)元表示每個(gè)類別的概率。對(duì)于這樣的神經(jīng)網(wǎng)絡(luò),其損失函數(shù)的一般化形式如下。
$$J(\Theta)=-\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K}\left[y_{k}^{(i)} \log \left(\left(h_{\Theta}\left(x^{(i)}\right)\right)_{k}\right)+\left(1-y_{k}^{(i)}\right) \log \left(1-\left(h_{\Theta}\left(x^{(i)}\right)\right)_{k}\right)\right]+\frac{\lambda}{2 m} \sum_{l=1}^{L-1} \sum_{i=1}^{s_{l}} \sum_{j=1}^{s_{l+1}}\left(\Theta_{j, i}^{(l)}\right)^{2}$$
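To make the formula concrete, here is a minimal NumPy sketch evaluating this regularized cost; the names nn_cost, h, Y, thetas, and lam are mine for illustration, with h the matrix of network outputs, Y the one-hot labels, and thetas the per-layer weight matrices.

import numpy as np

def nn_cost(h, Y, thetas, lam):
    """Regularized cross-entropy cost; h and Y both have shape (m, K)."""
    m = Y.shape[0]
    # data term: cross-entropy summed over samples and output units
    data_term = -np.sum(Y * np.log(h) + (1 - Y) * np.log(1 - h)) / m
    # regularization term: squared weights, bias terms assumed excluded
    reg_term = lam / (2 * m) * sum(np.sum(T ** 2) for T in thetas)
    return data_term + reg_term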
Model Training

The cost function commonly used in neural networks (the cross-entropy cost) was given above; training a neural network means minimizing this cost. In a neural network's computation, the process of using the weight matrices to compute the output layer's output step by step is called forward propagation (forward), while updating the parameters with gradient descent requires the gradient of every parameter. The algorithm that computes these gradients is called the backpropagation algorithm. Viewed simply, its idea is that every neuron is assigned an error value, obtained from the next layer back through the transposed weight matrix. Concretely: compute the error of each output-layer neuron, send these errors back through the weight matrix to the previous layer, and so on, yielding an error value for every neuron in each hidden layer (the input layer has none).
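As a sketch of that idea for the single-hidden-layer case, assuming column-vector inputs; the conventions follow the code later in this post, where errors are defined as label minus output and the weights are updated by adding the gradients times the learning rate.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def backprop_step(img, label, W1, b1, W2, b2):
    """One backpropagation step for a single sample (column vectors)."""
    # forward pass
    a1 = sigmoid(W1 @ img + b1)
    a2 = sigmoid(W2 @ a1 + b2)
    # output-layer error, scaled by the sigmoid derivative a * (1 - a)
    delta2 = a2 * (1 - a2) * (label - a2)
    # send the error back to the hidden layer through the transposed weights
    delta1 = a1 * (1 - a1) * (W2.T @ delta2)
    # gradients are outer products of errors and incoming activations
    return delta1 @ img.T, delta1, delta2 @ a1.T, delta2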
模型應(yīng)用
神經(jīng)網(wǎng)絡(luò)這種模型既可以用于監(jiān)督學(xué)習(xí)又可以用于非監(jiān)督模型,只是調(diào)整損失函數(shù)而已。之前,對(duì)于多分類問題,主要使用one vs all策略,即構(gòu)建多個(gè)分類器,而在神經(jīng)網(wǎng)絡(luò)中則可以將最后一層的輸出設(shè)置為類別個(gè)數(shù),這樣每個(gè)輸出層節(jié)點(diǎn)的值就是相應(yīng)類別的得分(score)或者概率(softmax)激活,將得分最高的作為最終預(yù)測(cè)類別,計(jì)算損失,依據(jù)反向傳播算法更新權(quán)重。 下面,構(gòu)建一個(gè)只包含一個(gè)隱藏層的神經(jīng)網(wǎng)絡(luò)用于手寫體分類。數(shù)據(jù)集采用一個(gè)手寫數(shù)字?jǐn)?shù)據(jù)集。
"""
Author: Zhou Chen
Date: 2019/11/13
Desc: About
"""
import numpy
as np
import scipy
. io
as scio
from sklearn
. preprocessing
import LabelBinarizer
import matplotlib
. pyplot
as plt
plt
. style
. use
( 'fivethirtyeight' ) def sigmoid ( x
) : return 1 / ( 1 + np
. exp
( - x
) ) def sigmoid_grad ( x
) : return x
* ( 1 - x
) def one_hot ( x
) : array
= np
. zeros
( shape
= [ 10 , 1 ] ) array
[ x
- 1 , 0 ] = 1 return array
def plot_his ( his
) : """繪制訓(xùn)練過程:param his::return:""" plt
. plot
( np
. arange
( len ( his
[ 'loss' ] ) ) , his
[ 'loss' ] , label
= 'loss' ) plt
. plot
( np
. arange
( len ( his
[ 'accuracy' ] ) ) , his
[ 'accuracy' ] , label
= 'accuracy' ) plt
. title
( 'training history' ) plt
. legend
( loc
= 0 ) plt
. show
( ) def mse ( y_label
, y_pred
) : y_pred
= np
. squeeze
( y_pred
, axis
= - 1 ) if y_label
. shape
== y_pred
. shape
: return np
. sum ( ( y
- y_pred
) ** 2 / y
. shape
[ 0 ] ) else : print ( "no match shape" ) return None class BPNet ( object ) : def __init__ ( self
) : """構(gòu)建單隱層神經(jīng)網(wǎng)絡(luò)""" self
. weights
= None self
. bias
= None self
. history
= { 'loss' : [ ] , 'accuracy' : [ ] } def train ( self
, x
, y
, trained_weights
= None , learning_rate
= 1e - 3 , epochs
= 100 ) : if trained_weights
: self
. weights
= [ trained_weights
[ 0 ] [ : , 1 : ] , trained_weights
[ 1 ] [ : , 1 : ] ] self
. bias
= [ trained_weights
[ 0 ] [ : , 0 ] , trained_weights
[ 1 ] [ : , 0 ] ] else : print ( "init weights" ) self
. weights
= [ np
. random
. normal
( size
= [ 25 , 400 ] ) , np
. random
. normal
( size
= [ 10 , 25 ] ) ] self
. bias
= [ np
. random
. normal
( size
= [ 25 , 1 ] ) , np
. random
. normal
( size
= [ 10 , 1 ] ) ] for epoch
in range ( epochs
) : for i
in range ( x
. shape
[ 0 ] ) : img
= x
[ i
] . reshape
( - 1 , 1 ) label
= y
[ i
] . reshape
( - 1 , 1 ) input_hidden
= self
. weights
[ 0 ] @ img
+ self
. bias
[ 0 ] . reshape
( - 1 , 1 ) output_hidden
= sigmoid
( input_hidden
) input_output
= self
. weights
[ 1 ] @ output_hidden
+ self
. bias
[ 1 ] . reshape
( - 1 , 1 ) output_output
= sigmoid
( input_output
) output_error
= sigmoid_grad
( output_output
) * ( label
- output_output
) hidden_error
= sigmoid_grad
( output_hidden
) * ( self
. weights
[ 1 ] . T @ output_error
) self
. weights
[ 1 ] += ( output_error @ output_hidden
. T
) * learning_rateself
. bias
[ 1 ] += output_error
* learning_rateself
. weights
[ 0 ] += ( hidden_error @ img
. T
) * learning_rateself
. bias
[ 0 ] += hidden_error
* learning_ratepred_epoch
= np
. argmax
( np
. squeeze
( self
. predict
( x
) , axis
= - 1 ) , axis
= 1 ) y_true
= np
. argmax
( y
, axis
= 1 ) acc
= np
. sum ( pred_epoch
. reshape
( - 1 , 1 ) == y_true
. reshape
( - 1 , 1 ) ) / y
. shape
[ 0 ] loss
= mse
( y
, self
. predict
( x
) ) self
. history
[ 'loss' ] . append
( loss
) self
. history
[ 'accuracy' ] . append
( acc
) print ( "epoch {}, loss {}, accuracy {}" . format ( epoch
, loss
, acc
) ) if epoch
> 10 and abs ( self
. history
[ 'loss' ] [ - 1 ] - self
. history
[ 'loss' ] [ - 2 ] ) < 1e - 5 : break return self
. history
def predict ( self
, x
, trained_weights
= None ) : if trained_weights
: self
. weights
= [ trained_weights
[ 0 ] [ : , 1 : ] , trained_weights
[ 1 ] [ : , 1 : ] ] self
. bias
= [ trained_weights
[ 0 ] [ : , 0 ] , trained_weights
[ 1 ] [ : , 0 ] ] if self
. weights
is None : print ( "no weights, cannot predict" ) result
= [ ] for i
in range ( x
. shape
[ 0 ] ) : img
= x
[ i
] . reshape
( - 1 , 1 ) input_hidden
= self
. weights
[ 0 ] @ img
+ self
. bias
[ 0 ] . reshape
( - 1 , 1 ) output_hidden
= sigmoid
( input_hidden
) input_output
= self
. weights
[ 1 ] @ output_hidden
+ self
. bias
[ 1 ] . reshape
( - 1 , 1 ) output_output
= sigmoid
( input_output
) result
. append
( output_output
) return np
. array
( result
) if __name__
== '__main__' : data
= scio
. loadmat
( '../data/ex3data1.mat' ) pretrained_weights
= scio
. loadmat
( '../data/ex3weights.mat' ) X
, y
= data
[ 'X' ] , data
[ 'y' ] y
= LabelBinarizer
( ) . fit_transform
( y
) w_hidden
, w_output
= pretrained_weights
[ 'Theta1' ] , pretrained_weights
[ 'Theta2' ] net
= BPNet
( ) pred_result
= net
. predict
( X
, [ w_hidden
, w_output
] ) pred_result
= np
. argmax
( np
. squeeze
( pred_result
, axis
= - 1 ) , axis
= 1 ) y_true
= np
. argmax
( y
, axis
= 1 ) print ( "載入?yún)?shù)前向傳播準(zhǔn)確率" , np
. sum ( pred_result
. reshape
( - 1 , 1 ) == y_true
. reshape
( - 1 , 1 ) ) / y
. shape
[ 0 ] ) his
= net
. train
( X
, y
, learning_rate
= 1e - 1 , epochs
= 200 ) plot_his
( his
)
Calling plot_his visualizes the training process, plotting the loss and accuracy curves over the epochs.
Additional Notes

This post briefly presented the basic ideas of neural networks and ran a simple experiment with a single-hidden-layer network, following Andrew Ng's machine learning course on Coursera. The posts and code of this series are open on Github; you are welcome to visit the project. The posts are also published on my personal blog, where the other articles can be found. As my abilities are limited, corrections in the comments are welcome if you spot any errors.