Reasoning with Sarcasm by Reading In-between
Method overview:
This paper proposes two new models: SIARN (Single-dimensional Intra-Attention Recurrent Network) and MIARN (Multi-dimensional Intra-Attention Recurrent Network).
First, a definition: the relation score $s_{i,j}$ measures the degree of informational association between words $w_i$ and $w_j$. The two models differ only in how this score is computed: SIARN considers a single intrinsic relation per word pair, so $s_{i,j}$ is a scalar; MIARN considers multiple ($k$) intrinsic relations per word pair, so $s_{i,j}$ is first a $k$-dimensional vector, which is then fused into a scalar.
The model comprises three components: a Single/Multi-dimensional Intra-Attention layer, an LSTM, and a prediction layer:
Single/Multi-dimensional Intra-Attention: uses word-pair information to produce the sentence's Intra-Attentive Representation
LSTM: uses the sentence's sequential information to produce its Compositional Representation
Prediction Layer: fuses the two representations and performs binary classification
Component algorithms:
Single/Multi-dimensional Intra-Attention
Single-dimensional:
$s_{i,j} = W_a([w_i; w_j]) + b_a \implies s_{i,j} \in \mathbb{R}$ (a scalar)
where $W_a \in \mathbb{R}^{2n \times 1}$, $b_a \in \mathbb{R}$.
Multi-dimensional:
$\hat{s}_{i,j} = W_q([w_i; w_j]) + b_q \implies \hat{s}_{i,j} \in \mathbb{R}^k$ (a $k$-dimensional vector)
where $W_q \in \mathbb{R}^{2n \times k}$, $b_q \in \mathbb{R}^k$.
$s_{i,j} = W_p(\mathrm{ReLU}(\hat{s}_{i,j})) + b_p$
where $W_p \in \mathbb{R}^{k \times 1}$, $b_p \in \mathbb{R}$.
Substituting the first equation into the second:
$s_{i,j} = W_p(\mathrm{ReLU}(W_q([w_i; w_j]) + b_q)) + b_p$
where $W_q \in \mathbb{R}^{2n \times k}$, $b_q \in \mathbb{R}^k$, $W_p \in \mathbb{R}^{k \times 1}$, $b_p \in \mathbb{R}$.
Thus, for a sentence of length $l$, we obtain a symmetric matrix $s \in \mathbb{R}^{l \times l}$.
Row-wise max-pooling over $s$ (taking the maximum of each row) yields the attention vector $a \in \mathbb{R}^l$.
With the weight vector $a$, a weighted sum over the sentence's words gives the Intra-Attentive Representation $v_a \in \mathbb{R}^n$:
$v_a = \sum_{i=1}^{l} a_i w_i$
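The multi-dimensional scoring, max-pooling, and weighted-sum steps above can be sketched in NumPy. The sizes ($n{=}4$, $k{=}3$, $l{=}5$), the random parameters, and the softmax normalization of the max-pooled scores are illustrative assumptions, not details from the write-up:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_dim_intra_attention(W, Wq, bq, Wp, bp):
    """W: (l, n) word embeddings. Returns attention a (l,) and v_a (n,)."""
    l, n = W.shape
    s = np.zeros((l, l))
    for i in range(l):
        for j in range(l):
            pair = np.concatenate([W[i], W[j]])        # [w_i; w_j], shape (2n,)
            s_hat = pair @ Wq + bq                     # k-dimensional relation score
            s[i, j] = np.maximum(s_hat, 0) @ Wp + bp   # fuse to scalar s_{i,j}
    a = softmax(s.max(axis=1))                         # row-wise max-pool, then normalize
    v_a = a @ W                                        # weighted sum of word vectors
    return a, v_a

# Toy example with hypothetical sizes.
rng = np.random.default_rng(0)
n, k, l = 4, 3, 5
W  = rng.normal(size=(l, n))
Wq = rng.normal(size=(2 * n, k)); bq = rng.normal(size=k)
Wp = rng.normal(size=k);          bp = 0.0
a, v_a = multi_dim_intra_attention(W, Wq, bq, Wp, bp)
print(a.shape, v_a.shape)
```

Setting $k=1$ and dropping the inner ReLU/projection recovers the single-dimensional (SIARN) variant.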
LSTM
Each LSTM time step outputs $h_i \in \mathbb{R}^d$:
$h_i = \mathrm{LSTM}(w, i), \forall i \in [1, \ldots, l]$
This paper uses the output of the last time step as the Compositional Representation $v_c \in \mathbb{R}^d$:
$v_c = h_l$
where $d$ is the LSTM hidden size and $l$ is the maximum sentence length.
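Taking the last hidden state as $v_c$ can be sketched with a minimal NumPy LSTM cell; the gate layout, random weights, and sizes ($n{=}4$, $d{=}6$, $l{=}5$) are illustrative assumptions:

```python
import numpy as np

def lstm_last_state(W_seq, Wx, Wh, b, d):
    """Run a single-layer LSTM over W_seq (l, n); return h_l (d,) as v_c.
    Gate layout in Wx/Wh/b: [input, forget, cell, output], each of width d."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    h = np.zeros(d)
    c = np.zeros(d)
    for w in W_seq:
        z = w @ Wx + h @ Wh + b            # pre-activations for all four gates
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)         # cell state update
        h = o * np.tanh(c)                 # hidden state
    return h                               # v_c = h_l

rng = np.random.default_rng(2)
n, d, l = 4, 6, 5
W_seq = rng.normal(size=(l, n))
Wx = rng.normal(size=(n, 4 * d))
Wh = rng.normal(size=(d, 4 * d))
b  = np.zeros(4 * d)
v_c = lstm_last_state(W_seq, Wx, Wh, b, d)
print(v_c.shape)
```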
Prediction Layer
Fusing the Intra-Attentive Representation $v_a \in \mathbb{R}^n$ and the Compositional Representation $v_c \in \mathbb{R}^d$ yields the fused representation $v \in \mathbb{R}^d$, which is then used for the binary prediction $\hat{y} \in \mathbb{R}^2$:
$v = \mathrm{ReLU}(W_z([v_a; v_c]) + b_z)$
$\hat{y} = \mathrm{Softmax}(W_f v + b_f)$
where $W_z \in \mathbb{R}^{(d+n) \times d}$, $b_z \in \mathbb{R}^d$, $W_f \in \mathbb{R}^{d \times 2}$, $b_f \in \mathbb{R}^2$.
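The two prediction-layer equations map directly onto a few lines of NumPy; the random parameters and sizes ($n{=}4$, $d{=}6$) are illustrative assumptions:

```python
import numpy as np

def predict(v_a, v_c, Wz, bz, Wf, bf):
    """Fuse intra-attentive (n,) and compositional (d,) representations,
    then produce a binary softmax distribution (2,)."""
    v = np.maximum(np.concatenate([v_a, v_c]) @ Wz + bz, 0)  # ReLU fusion -> (d,)
    logits = v @ Wf + bf                                     # (2,)
    e = np.exp(logits - logits.max())
    return e / e.sum()                                       # softmax

rng = np.random.default_rng(1)
n, d = 4, 6
v_a = rng.normal(size=n)
v_c = rng.normal(size=d)
Wz = rng.normal(size=(n + d, d)); bz = rng.normal(size=d)
Wf = rng.normal(size=(d, 2));     bf = rng.normal(size=2)
y_hat = predict(v_a, v_c, Wz, bz, Wf, bf)
print(y_hat.shape)
```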
Training objective:
Parameters to learn: $\theta = \{W_p, b_p, W_q, b_q, W_z, b_z, W_f, b_f\}$
Hyperparameters: $k$, $n$, $d$, $\lambda$