
A Roundup of LSTM Explanations

As [1] points out, the so-called "gate" is simply the multiplication unit.

The overall diagram, taken from [2], is as follows:

#-------------------------------------------------------------------------------
So in a full network, what exactly goes on top of the LSTM layer?
According to [10], it can be:

a mean-pooling layer,
or, following <Deep Learning with Python>,
simply a Dense(1) stacked directly on top (a minimal sketch follows below):
model.add(layers.Dense(1))
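For instance, a minimal tf.keras sketch of this setup (the layer sizes, input shape and loss are my own illustrative choices, not taken from the book):

# Minimal sketch: an LSTM whose last hidden state feeds a Dense(1) head.
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.LSTM(32, input_shape=(None, 16)))  # (timesteps, features); returns only the last h_t
model.add(layers.Dense(1))                          # single scalar output on top of the LSTM
model.compile(optimizer="rmsprop", loss="mse")
model.summary()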
#-------------------------------------------------------------------------------

The markers in the diagram have the following meanings (the marker glyphs themselves were small images and are not reproduced here):

- multiply by a factor
- two paths entering a computation
- data on one of the two paths
- dot product of the two paths, plus a bias
- the same result passed on to both paths

#-------------------------------------------------------------------------------
What exactly is a gate?
According to the definition in [3]:
Gates are a way to optionally let information through. They are composed out of a sigmoid neural net layer and a pointwise multiplication operation.
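As a tiny NumPy illustration of that definition (all names and sizes here are made up for illustration), a gate is a sigmoid layer whose output multiplies, element-wise, the signal being gated:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=4)                 # signal to be gated (hypothetical)
W, b = rng.normal(size=(4, 4)), np.zeros(4)

gate = sigmoid(W @ x + b)              # sigmoid neural net layer: values in (0, 1)
gated = gate * x                       # pointwise multiplication: lets information through partially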

The three gates people usually talk about are shown below (inside the red circles):

#-------------------------------------------------------------------------------
The figure above and the formulas in this post actually come from [3].
An LSTM is a more elaborate version of an RNN.
Its internal computations are given by the following formulas:

$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$  (forget gate)
$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$  (input gate / write gate)
$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$
$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t$
$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$  (output gate / read gate)
$h_t = o_t * \tanh(C_t)$
In the English literature, $C_t$ is referred to as the carry track.
Just match these formulas one by one against the corresponding parts of the figure above.
The definitions of the gates above come from [5].
Here $\sigma$ is the sigmoid function.
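As a concrete check of these six formulas, here is a minimal NumPy sketch of a single LSTM time step (the shapes, names and random initialisation are illustrative assumptions, not any library's API):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, params):
    # One LSTM step, following the six formulas above.
    Wf, bf, Wi, bi, WC, bC, Wo, bo = params
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(Wf @ z + bf)               # forget gate
    i_t = sigmoid(Wi @ z + bi)               # input gate / write gate
    C_tilde = np.tanh(WC @ z + bC)           # candidate cell values
    C_t = f_t * C_prev + i_t * C_tilde       # new cell state (the carry track)
    o_t = sigmoid(Wo @ z + bo)               # output gate / read gate
    h_t = o_t * np.tanh(C_t)                 # new hidden state / output
    return h_t, C_t

# Toy sizes: input_size = 3, hidden_size = 2, so each W is (2, 5) and each b is (2,).
rng = np.random.default_rng(0)
params = tuple(rng.normal(size=(2, 5)) if k % 2 == 0 else np.zeros(2) for k in range(8))
h, C = np.zeros(2), np.zeros(2)
h, C = lstm_step(rng.normal(size=3), h, C, params)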
What is a gate, fundamentally?
According to [8], a gate is essentially a single-layer neural network, so when an LSTM is trained the gates are trained as well; the figure above therefore contains four trainable networks. Training an LSTM really means training the red-marked weights ($W_f$, $W_i$, $W_C$, $W_o$).

According to recent research [9]:
apart from the forget gate, the other two gates can be removed, and the results are even better.
So don't agonise over what each individual gate is supposed to achieve.
#-------------------------------------------------------------------------------
What is the state, what is the output, and where are they in the figure?
According to [6]:
the state is $C_t$,
and the output is $h_t$.
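This distinction is directly visible in tf.keras: with return_state=True an LSTM layer returns the output together with the two state tensors (the sizes below are illustrative):

# Small sketch showing output (h_t) vs. state (C_t) in tf.keras.
import tensorflow as tf

inputs = tf.random.normal([8, 10, 16])              # (batch, timesteps, features)
lstm = tf.keras.layers.LSTM(32, return_state=True)
output, state_h, state_c = lstm(inputs)
# output is the last h_t and equals state_h; state_c is the cell state C_t.
print(output.shape, state_h.shape, state_c.shape)   # (8, 32) (8, 32) (8, 32)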
#-------------------------------------------------------------------------------
The difference between the tanh and sigmoid functions used above (the original table also showed the function curves as images):

Activation   | Expression                       | Range
tanh(x)      | (e^x - e^{-x}) / (e^x + e^{-x})  | (-1, 1)
sigmoid(x)   | 1 / (1 + e^{-x})                 | (0, 1)
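A quick numerical check of the two ranges (purely illustrative):

import numpy as np

z = np.linspace(-10.0, 10.0, 5)
print(np.tanh(z))                 # saturates towards -1 and 1
print(1.0 / (1.0 + np.exp(-z)))   # saturates towards 0 and 1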

#-------------------------------------------------------------------------------
When does the LSTM decide to forget, and when does it decide to keep information?
In terms of the formulas above: when the forget gate $f_t$ is close to 0 the old cell state $C_{t-1}$ is dropped, and when the input gate $i_t$ is close to 1 new information is written into the cell state.

#-------------------------------------------------------------------------------
An annotated walk-through of the LSTM source code, from [11]:

RNN key code:

@tf_export("nn.rnn_cell.BasicRNNCell")
class BasicRNNCell(LayerRNNCell):
  """The most basic RNN cell.

  Args:
    num_units: int, The number of units in the RNN cell.
    activation: Nonlinearity to use.  Default: `tanh`.
    reuse: (optional) Python boolean describing whether to reuse variables
      in an existing scope.  If not `True`, and the existing scope already has
      the given variables, an error is raised.
    name: String, the name of the layer. Layers with the same name will
      share weights, but to avoid mistakes we require reuse=True in such
      cases.
    dtype: Default dtype of the layer (default of `None` means use the type
      of the first input). Required when `build` is called before `call`.
  """

  def __init__(self, num_units, activation=None, reuse=None, name=None, dtype=None):
    super(BasicRNNCell, self).__init__(_reuse=reuse, name=name, dtype=dtype)

    # Inputs must be 2-dimensional.
    self.input_spec = base_layer.InputSpec(ndim=2)

    self._num_units = num_units
    self._activation = activation or math_ops.tanh

  @property
  def state_size(self):
    return self._num_units

  @property
  def output_size(self):
    return self._num_units

  def build(self, inputs_shape):
    if inputs_shape[1].value is None:
      raise ValueError("Expected inputs.shape[-1] to be known, saw shape: %s"
                       % inputs_shape)

    input_depth = inputs_shape[1].value
    # Create W and B; their shapes are
    #   W: [input_size + hidden_size, hidden_size]
    #   B: [hidden_size]
    self._kernel = self.add_variable(
        _WEIGHTS_VARIABLE_NAME,
        shape=[input_depth + self._num_units, self._num_units])
    self._bias = self.add_variable(
        _BIAS_VARIABLE_NAME,
        shape=[self._num_units],
        initializer=init_ops.zeros_initializer(dtype=self.dtype))

    self.built = True

  # This function is called num_step (sentence length) times; after that, the layer is done.
  def call(self, inputs, state):
    """Most basic RNN: output = new_state = act(W * input + U * state + B)."""
    # output = Ht = tanh([Xt, Ht-1] * W + B)
    # At time step 0, the current state (i.e. the previous output H0) is all zeros.
    # inputs has shape [batch_size, emb_size]; state has shape [batch_size, hidden_size].
    # matmul: matrix multiplication.
    # array_ops.concat joins the two matrices into shape
    #   [batch_size, input_size + hidden_size], i.e. [Xt, Ht-1].
    # The product [inputs, state] * [W, U] == [Xt, Ht-1] * W has shape [batch_size, hidden_size].
    gate_inputs = math_ops.matmul(
        array_ops.concat([inputs, state], 1), self._kernel)
    # B has shape [hidden_size]; [Xt, Ht-1] * W has shape [batch_size, hidden_size].
    # nn_ops.bias_add adds B to the result for every example in the batch.
    # After adding B: Ht = tanh([Xt, Ht-1] * W + B), still of shape [batch_size, hidden_size].
    # This Ht becomes the input of the next time step and of the next layer.
    gate_inputs = nn_ops.bias_add(gate_inputs, self._bias)
    output = self._activation(gate_inputs)
    # The returned tensors have shape [batch_size, hidden_size]:
    # one copy of output is Ht for the next time step, the other feeds the next layer.
    return output, output


LSTM key code:

@tf_export("nn.rnn_cell.BasicLSTMCell")
class BasicLSTMCell(LayerRNNCell):
  """Basic LSTM recurrent network cell.

  The implementation is based on: http://arxiv.org/abs/1409.2329.

  We add forget_bias (default: 1) to the biases of the forget gate in order to
  reduce the scale of forgetting in the beginning of the training.

  It does not allow cell clipping, a projection layer, and does not
  use peep-hole connections: it is the basic baseline.

  For advanced models, please use the full @{tf.nn.rnn_cell.LSTMCell}
  that follows.
  """

  def __init__(self, num_units, forget_bias=1.0, state_is_tuple=True,
               activation=None, reuse=None, name=None, dtype=None):
    """Initialize the basic LSTM cell.

    Args:
      num_units: int, The number of units in the LSTM cell.
      forget_bias: float, The bias added to forget gates (see above).
        Must set to `0.0` manually when restoring from CudnnLSTM-trained
        checkpoints.
      state_is_tuple: If True, accepted and returned states are 2-tuples of
        the `c_state` and `m_state`.  If False, they are concatenated
        along the column axis.  The latter behavior will soon be deprecated.
      activation: Activation function of the inner states.  Default: `tanh`.
      reuse: (optional) Python boolean describing whether to reuse variables
        in an existing scope.  If not `True`, and the existing scope already
        has the given variables, an error is raised.
      name: String, the name of the layer. Layers with the same name will
        share weights, but to avoid mistakes we require reuse=True in such
        cases.
      dtype: Default dtype of the layer (default of `None` means use the type
        of the first input). Required when `build` is called before `call`.
        When restoring from CudnnLSTM-trained checkpoints, must use
        `CudnnCompatibleLSTMCell` instead.
    """
    super(BasicLSTMCell, self).__init__(_reuse=reuse, name=name, dtype=dtype)
    if not state_is_tuple:
      logging.warn("%s: Using a concatenated state is slower and will soon be "
                   "deprecated.  Use state_is_tuple=True.", self)

    # Inputs must be 2-dimensional.
    self.input_spec = base_layer.InputSpec(ndim=2)

    self._num_units = num_units
    self._forget_bias = forget_bias
    self._state_is_tuple = state_is_tuple
    self._activation = activation or math_ops.tanh

  @property
  def state_size(self):
    # Size of the hidden state:
    return (LSTMStateTuple(self._num_units, self._num_units)
            if self._state_is_tuple else 2 * self._num_units)

  @property
  def output_size(self):
    # Output size: hidden_size
    return self._num_units

  def build(self, inputs_shape):
    if inputs_shape[1].value is None:
      raise ValueError("Expected inputs.shape[-1] to be known, saw shape: %s"
                       % inputs_shape)

    # inputs has shape [batch_size, input_size].
    # For the first layer (the word input at each time step) input_size is the
    # embedding_size, i.e. the word-vector dimension, so input_depth == input_size.
    input_depth = inputs_shape[1].value
    # h_depth is hidden_size, the dimension of the hidden layer.
    h_depth = self._num_units
    # self._kernel == W, with shape [input_size + hidden_size, 4 * hidden_size].
    # The four W's and B's are defined together so that i, j, f, o can be computed
    # in a single multiplication; they correspond to f_t, i_t, C~_t, o_t in the figure.
    self._kernel = self.add_variable(
        _WEIGHTS_VARIABLE_NAME,
        shape=[input_depth + h_depth, 4 * self._num_units])
    # B has shape [4 * hidden_size].
    self._bias = self.add_variable(
        _BIAS_VARIABLE_NAME,
        shape=[4 * self._num_units],
        initializer=init_ops.zeros_initializer(dtype=self.dtype))

    self.built = True

  def call(self, inputs, state):
    """Long short-term memory cell (LSTM).

    Args:
      inputs: `2-D` tensor with shape `[batch_size, input_size]`.
      state: An `LSTMStateTuple` of state tensors, each shaped
        `[batch_size, num_units]`, if `state_is_tuple` has been set to
        `True`.  Otherwise, a `Tensor` shaped
        `[batch_size, 2 * num_units]`.

    Returns:
      A pair containing the new hidden state, and the new state (either a
        `LSTMStateTuple` or a concatenated state, depending on
        `state_is_tuple`).
    """
    sigmoid = math_ops.sigmoid
    one = constant_op.constant(1, dtype=dtypes.int32)
    # Parameters of gates are concatenated into one multiply for efficiency.
    # At time step 0 of every layer, c and h are initialized to all zeros.
    if self._state_is_tuple:
      c, h = state
    else:
      c, h = array_ops.split(value=state, num_or_size_splits=2, axis=one)

    # Combine the current input Xt with the previous output Ht-1.
    # inputs shape: [batch_size, input_size]; for the first layer input_size == embedding_size.
    # The concatenation has shape [batch_size, input_size + hidden_size];
    # W has shape [input_size + hidden_size, 4 * hidden_size];
    # their product has shape [batch_size, 4 * hidden_size].
    gate_inputs = math_ops.matmul(
        array_ops.concat([inputs, h], 1), self._kernel)
    # B has shape [4 * hidden_size]; [Xt, Ht-1] * W has shape [batch_size, 4 * hidden_size].
    # nn_ops.bias_add adds B to the result for every example in the batch.
    # After adding B we have the combined i, j, f, o: [Xt, Ht-1] * W + B,
    # still of shape [batch_size, 4 * hidden_size].
    gate_inputs = nn_ops.bias_add(gate_inputs, self._bias)

    # i = input_gate, j = new_input, f = forget_gate, o = output_gate
    # Split the product above into four parts, the values of i, j, f, o,
    # each of shape [batch_size, hidden_size].
    i, j, f, o = array_ops.split(
        value=gate_inputs, num_or_size_splits=4, axis=one)

    forget_bias_tensor = constant_op.constant(self._forget_bias, dtype=f.dtype)
    # Note that using `add` and `multiply` instead of `+` and `*` gives a
    # performance improvement. So using those at the cost of readability.
    add = math_ops.add
    multiply = math_ops.multiply
    # Add the forget bias here; it controls which elements get forgotten.
    # These are element-wise multiplications: all four operands have shape
    # [batch_size, hidden_size], so the shape does not change.
    # new_c = c * sigmoid(f + bias) + sigmoid(i) * tanh(j)
    # The result has shape [batch_size, hidden_size].
    new_c = add(multiply(c, sigmoid(add(f, forget_bias_tensor))),
                multiply(sigmoid(i), self._activation(j)))
    # Element-wise multiplication of two [batch_size, hidden_size] tensors; shape unchanged.
    # new_h = sigmoid(o) * tanh(new_c)
    new_h = multiply(self._activation(new_c), sigmoid(o))
    # After these computations new_c and new_h both have shape [batch_size, hidden_size]
    # (their values differ).

    if self._state_is_tuple:
      new_state = LSTMStateTuple(new_c, new_h)
    else:
      new_state = array_ops.concat([new_c, new_h], 1)
    # new_h is H of the last time step; new_state holds both H and C of the last step.
    # Calling this function num_step times (the maximum sequence length) finishes the layer.
    # new_c and new_h become the next step's inputs: new_h is concatenated with Xt+1,
    # giving shape [batch_size, input_size + hidden_size].
    # If there is a layer above, this new_h also becomes that layer's Xt at this time step.
    return new_h, new_state
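A hedged usage sketch for the cell above (this assumes the TensorFlow 1.x API, matching the BasicLSTMCell source; the sizes are illustrative):

# Driving BasicLSTMCell with dynamic_rnn (TensorFlow 1.x; shapes are illustrative).
import tensorflow as tf  # assumes TF 1.x

batch_size, num_steps, emb_size, hidden_size = 32, 20, 128, 256
inputs = tf.placeholder(tf.float32, [batch_size, num_steps, emb_size])

cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size)
initial_state = cell.zero_state(batch_size, tf.float32)  # c and h start as zeros
# outputs: [batch_size, num_steps, hidden_size] (h_t at every step);
# final_state: LSTMStateTuple(c, h) from the last step.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)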

#-------------------------------------------------------------------------------
How should the dimensions in an LSTM be understood? From [11]:
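To summarise the shapes in my own words (same names as in the code above): inputs are [batch_size, num_steps, input_size]; the concatenated kernel W is [input_size + hidden_size, 4 * hidden_size]; and every per-step tensor (h_t, C_t and each gate) is [batch_size, hidden_size]. A small tf.keras sketch that makes the shapes visible (note that Keras stores the input and recurrent kernels separately rather than as one concatenated matrix; sizes are illustrative):

import tensorflow as tf

batch, steps, features, units = 4, 7, 16, 32
x = tf.random.normal([batch, steps, features])

layer = tf.keras.layers.LSTM(units, return_sequences=True)
print(layer(x).shape)                        # (4, 7, 32): one h_t per time step
print(tf.keras.layers.LSTM(units)(x).shape)  # (4, 32): only the last h_t
# Trainable weights: input kernel (features, 4*units), recurrent kernel (units, 4*units), bias (4*units,)
print([w.shape for w in layer.weights])      # [(16, 128), (32, 128), (128,)]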

#-------------------------------------------------------------------------------
For the vanishing/exploding-gradient question, see [13].
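The core of the standard argument, briefly and in my own words (not a quotation from [13]): the carry track is updated additively, so the gradient that flows along it is scaled by the forget gate instead of being repeatedly multiplied by a recurrent weight matrix and a squashing nonlinearity. Treating the gate activations as constants:

$\frac{\partial C_t}{\partial C_{t-1}} = f_t, \qquad \frac{\partial C_t}{\partial C_k} = \prod_{j=k+1}^{t} f_j$

If the network learns to keep $f_j$ close to 1 over the relevant span, this product need not vanish, whereas the analogous factor in a plain RNN involves repeated multiplication by the recurrent weights and tanh derivatives, which tends to shrink or blow up.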
#-------------------------------------------------------------------------------
What is currently the best unit for sequence modelling?
According to [12], it is the Transformer.
#-------------------------------------------------------------------------------

Reference:

[1] Academia | Amazing: an LSTM with only a forget gate outperforms the standard LSTM
[2] An introductory summary of LSTMs
[3] Understanding LSTM Networks
[4] How is the LSTM RNN forget gate calculated?
[5] How the LSTM decides when to store long information, short information or reset information?
[6] What is the difference between states and outputs in LSTM?
[7] Questions about LSTM parameters?
[8] How does an LSTM decide what to "forget"?
[9] The unreasonable effectiveness of the forget gate
[10] LSTM source-code analysis
[11] TensorFlow notes 8: RNN/LSTM source code, training-code inputs and outputs, dimension analysis
[12] LSTM and GRU
[13] How LSTM networks solve the problem of vanishing gradients
