Reasoning with Sarcasm by Reading In-between
Method overview:
This paper proposes two new models: SIARN (Single-dimensional Intra-Attention Recurrent Network) and MIARN (Multi-dimensional Intra-Attention Recurrent Network).
First, a definition: the relation score $s_{i,j}$ measures the strength of the informative connection between words $w_i$ and $w_j$. The two models differ only in how this score is modeled: SIARN considers a single intrinsic relation between each word pair, so $s_{i,j}$ is a scalar, while MIARN considers multiple ($k$) intrinsic relations, producing a $k$-dimensional vector that is then projected down to a scalar.
The model consists of three components: a Single/Multi-dimensional Intra-Attention layer, an LSTM, and a Prediction Layer:
Single/Multi-dimensional Intra-Attention: uses word-pair information to build the sentence's Intra-Attentive Representation
LSTM: uses the sequential information of the sentence to build its Compositional Representation
Prediction Layer: fuses the two representations and makes a binary prediction
Details of each component:
Single/Multi-dimensional Intra-Attention
Single-dimensional:
$s_{i,j} = W_a([w_i; w_j]) + b_a \implies s_{i,j} \in R$ (a scalar)
$W_a \in R^{2n \times 1},\ b_a \in R$
Multi-dimensional:
$\hat{s}_{i,j} = W_q([w_i; w_j]) + b_q \implies \hat{s}_{i,j} \in R^k$ (a $k$-dimensional vector)
$W_q \in R^{2n \times k},\ b_q \in R^k$
$s_{i,j} = W_p(\mathrm{ReLU}(\hat{s}_{i,j})) + b_p$
$W_p \in R^{k \times 1},\ b_p \in R$
Substituting the first equation into the second, the two steps combine into:
$s_{i,j} = W_p(\mathrm{ReLU}(W_q([w_i; w_j]) + b_q)) + b_p$
$W_q \in R^{2n \times k},\ b_q \in R^k,\ W_p \in R^{k \times 1},\ b_p \in R$
Thus, for a sentence of length $l$, we obtain a symmetric score matrix $s \in R^{l \times l}$.
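As a concrete reference, here is a minimal PyTorch sketch of the two scoring variants that builds this $l \times l$ matrix; the class and argument names are illustrative, not taken from the paper's released code:

```python
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    """Scores every ordered word pair (w_i, w_j) of a sentence.

    mode="single": s_ij = W_a [w_i; w_j] + b_a                  (SIARN)
    mode="multi":  s_ij = W_p ReLU(W_q [w_i; w_j] + b_q) + b_p  (MIARN)
    """
    def __init__(self, n, k=10, mode="multi"):
        super().__init__()
        if mode == "single":
            self.score = nn.Linear(2 * n, 1)          # W_a, b_a
        else:
            self.score = nn.Sequential(
                nn.Linear(2 * n, k),                  # W_q, b_q
                nn.ReLU(),
                nn.Linear(k, 1),                      # W_p, b_p
            )

    def forward(self, w):                             # w: (l, n) word embeddings
        l, n = w.shape
        wi = w.unsqueeze(1).expand(l, l, n)           # row index i
        wj = w.unsqueeze(0).expand(l, l, n)           # column index j
        pairs = torch.cat([wi, wj], dim=-1)           # (l, l, 2n) -> [w_i; w_j]
        return self.score(pairs).squeeze(-1)          # (l, l) score matrix s
```

Broadcasting the embeddings along two axes scores all $l \times l$ ordered pairs in one pass instead of looping over word pairs.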
對(duì)矩陣s進(jìn)行row-wise max-pooling,即按行取最大值,得到attention vector:a∈Rla \in R^la∈Rl
有了權(quán)重向量a,便可以對(duì)句子單詞進(jìn)行加權(quán)求和,得到Intra-Attentive Representation:va∈Rnv_a \in R^nva?∈Rn:
LSTM
Each time step $i$ of the LSTM produces an output $h_i \in R^d$:
$h_i = \mathrm{LSTM}(w, i),\ \forall i \in [1, \dots, l]$
The output of the last time step is used as the Compositional Representation $v_c \in R^d$:
$v_c = h_l$
Here $d$ is the number of LSTM hidden units and $l$ is the maximum sentence length.
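A minimal sketch of this encoder; using a single-layer unidirectional `nn.LSTM` is an assumption, since the text above only specifies that the last hidden state is kept:

```python
import torch.nn as nn

class CompositionalEncoder(nn.Module):
    def __init__(self, n, d):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n, hidden_size=d, batch_first=True)

    def forward(self, w):                  # w: (batch, l, n) embedded sentences
        _, (h_n, _) = self.lstm(w)         # h_n: (num_layers, batch, d)
        return h_n[-1]                     # v_c = h_l, shape (batch, d)
```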
Prediction Layer
The Intra-Attentive Representation $v_a \in R^n$ and the Compositional Representation $v_c \in R^d$ are fused into a single vector $v \in R^d$, from which the binary prediction $\hat{y} \in R^2$ is made:
$v = \mathrm{ReLU}(W_z([v_a; v_c]) + b_z)$
$\hat{y} = \mathrm{Softmax}(W_f v + b_f)$
where $W_z \in R^{(d+n) \times d},\ b_z \in R^d,\ W_f \in R^{d \times 2},\ b_f \in R^2$.
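A sketch of the fusion and prediction step; returning raw logits and deferring the softmax to the loss function is a PyTorch convention adopted here, not something the equations above prescribe:

```python
import torch
import torch.nn as nn

class PredictionLayer(nn.Module):
    def __init__(self, n, d):
        super().__init__()
        self.fuse = nn.Linear(n + d, d)    # W_z, b_z
        self.out = nn.Linear(d, 2)         # W_f, b_f

    def forward(self, v_a, v_c):           # v_a: (batch, n), v_c: (batch, d)
        v = torch.relu(self.fuse(torch.cat([v_a, v_c], dim=-1)))
        return self.out(v)                 # class logits; softmax is applied inside the loss
```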
訓(xùn)練目標(biāo):
待學(xué)習(xí)參數(shù):θ={Wp,bp,Wq,bq,Wz,bz,Wf,bf}\theta = \{W_p,b_p,W_q,b_q,W_z,b_z,W_f,b_f\}θ={Wp?,bp?,Wq?,bq?,Wz?,bz?,Wf?,bf?}
Hyperparameters: $k, n, d, \lambda$
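The loss itself is not written out above, but listing $\lambda$ as a hyperparameter suggests an L2-regularized cross-entropy objective; a sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def loss_fn(logits, labels, model, lam):
    """Cross-entropy over the two classes plus an assumed L2 penalty weighted by lambda."""
    ce = F.cross_entropy(logits, labels)                    # applies softmax internally
    l2 = sum(p.pow(2).sum() for p in model.parameters())
    return ce + lam * l2
```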