The ConvE Model
Table of Contents
- "Convolutional 2D Knowledge Graph Embeddings" paper notes
- Research problem
- Motivation
- Model details
- Overall model architecture
- Experiments
- Datasets
- Results
- Reference
"Convolutional 2D Knowledge Graph Embeddings" Paper Notes
Research Problem
Existing knowledge graphs all suffer from missing attributes, entities, and relations. Since real-world knowledge graphs serve diverse purposes, spanning question answering, recommendation, and other domains, research on knowledge graph completion is particularly important.
Motivation
Starting from neural-network-based knowledge graph embedding, the paper observes that shallow models are commonly used for link prediction, but such shallow models lack the capacity to extract deep features, while blindly increasing the embedding size leads to overfitting. Based on this, the authors design a parameter-efficient, computationally fast convolutional neural network for knowledge graph representation learning.
Model Details
The model's scoring function is defined as

\psi_r(e_s, e_o) = f(\mathrm{vec}(f([\hat{e}_s; \hat{r}_r] * \omega)) W) e_o

where \hat{e}_s and \hat{r}_r denote the 2D reshapings of the head-entity and relation embeddings, * denotes convolution with the filter set \omega, W is the weight matrix of the fully connected layer, and f is a non-linearity (ReLU is used). The score is turned into a probability with the logistic sigmoid: p = \sigma(\psi_r(e_s, e_o)).
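The 2D reshaping, concatenation, and convolution inside \psi_r can be sketched with plain tensor operations. This is a minimal illustration, not the full model; the sizes (a 200-dimensional embedding reshaped to 10 × 20, 32 filters of size 3 × 3) are assumptions chosen to match the paper's typical configuration:

```python
import torch
import torch.nn.functional as F

# Assumed sizes: embedding_dim = 200, reshaped into a 10 x 20 "image".
emb_dim, dim1, dim2 = 200, 10, 20
e_s = torch.randn(1, emb_dim)   # head-entity embedding
r_r = torch.randn(1, emb_dim)   # relation embedding

# 2D reshaping: each vector becomes a (1, 1, 10, 20) image
e_s_2d = e_s.view(-1, 1, dim1, dim2)
r_r_2d = r_r.view(-1, 1, dim1, dim2)

# Concatenate along the height axis -> (1, 1, 20, 20)
stacked = torch.cat([e_s_2d, r_r_2d], dim=2)

# Convolve with 32 filters of size 3x3 (no padding), then ReLU (the inner f)
conv = torch.nn.Conv2d(1, 32, (3, 3))
feat = F.relu(conv(stacked))          # (1, 32, 18, 18)

# vec(.) flattens the feature maps before the projection W back to emb_dim
flat = feat.view(feat.shape[0], -1)   # (1, 32 * 18 * 18)
```

Stacking the two reshaped embeddings before convolving is what lets the filters capture interactions between entity and relation features.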
The loss is the binary cross-entropy:

L(p, t) = -\frac{1}{N} \sum_i \left( t_i \cdot \log(p_i) + (1 - t_i) \cdot \log(1 - p_i) \right)
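This loss can be verified directly against PyTorch's built-in implementation. A minimal sketch with made-up probabilities and labels:

```python
import torch

# Toy predictions p and binary labels t (illustrative values only)
p = torch.tensor([0.9, 0.2, 0.7])   # predicted probabilities
t = torch.tensor([1.0, 0.0, 1.0])   # binary targets

# Binary cross-entropy written out term by term, averaged over N elements
loss = -(t * torch.log(p) + (1 - t) * torch.log(1 - p)).mean()
```

With the default 'mean' reduction, `torch.nn.BCELoss()(p, t)` computes the same value, which is why the model above simply sets `self.loss = torch.nn.BCELoss()`.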
ConvE model source code
(Indentation restored from the flattened snippet; the imports are added for completeness.)

import torch
import torch.nn.functional as F
from torch.nn import Parameter
from torch.nn.init import xavier_normal_


class ConvE(torch.nn.Module):
    def __init__(self, args, num_entities, num_relations):
        super(ConvE, self).__init__()
        self.emb_e = torch.nn.Embedding(num_entities, args.embedding_dim, padding_idx=0)
        self.emb_rel = torch.nn.Embedding(num_relations, args.embedding_dim, padding_idx=0)
        self.inp_drop = torch.nn.Dropout(args.input_drop)
        self.hidden_drop = torch.nn.Dropout(args.hidden_drop)
        self.feature_map_drop = torch.nn.Dropout2d(args.feat_drop)
        self.loss = torch.nn.BCELoss()
        self.emb_dim1 = args.embedding_shape1
        self.emb_dim2 = args.embedding_dim // self.emb_dim1
        self.conv1 = torch.nn.Conv2d(1, 32, (3, 3), 1, 0, bias=args.use_bias)
        self.bn0 = torch.nn.BatchNorm2d(1)
        self.bn1 = torch.nn.BatchNorm2d(32)
        self.bn2 = torch.nn.BatchNorm1d(args.embedding_dim)
        self.register_parameter('b', Parameter(torch.zeros(num_entities)))
        self.fc = torch.nn.Linear(args.hidden_size, args.embedding_dim)
        print(num_entities, num_relations)

    def init(self):
        xavier_normal_(self.emb_e.weight.data)
        xavier_normal_(self.emb_rel.weight.data)

    def forward(self, e1, rel):
        # Reshape the 1D embeddings into 2D "images" and stack them vertically
        e1_embedded = self.emb_e(e1).view(-1, 1, self.emb_dim1, self.emb_dim2)
        rel_embedded = self.emb_rel(rel).view(-1, 1, self.emb_dim1, self.emb_dim2)
        stacked_inputs = torch.cat([e1_embedded, rel_embedded], 2)
        stacked_inputs = self.bn0(stacked_inputs)
        x = self.inp_drop(stacked_inputs)
        # Convolution, batch normalization, non-linearity
        x = self.conv1(x)
        x = self.bn1(x)
        x = F.relu(x)
        x = self.feature_map_drop(x)
        # Flatten the feature maps and project back to embedding_dim
        x = x.view(x.shape[0], -1)
        x = self.fc(x)
        x = self.hidden_drop(x)
        x = self.bn2(x)
        x = F.relu(x)
        # Score against every entity embedding at once (1-N scoring)
        x = torch.mm(x, self.emb_e.weight.transpose(1, 0))
        x += self.b.expand_as(x)
        pred = torch.sigmoid(x)
        return pred  # the original snippet's bare `return` was truncated

ConvE first reshapes the entity and relation embeddings into 2D and concatenates (cat) them, applies dropout to the stacked input, then performs convolution, batch normalization, and so on; finally the result passes through a fully connected layer and a sigmoid (not softmax; the code uses torch.sigmoid for independent per-entity probabilities) to obtain the probability for each candidate entity.
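The final 1-N scoring step at the end of forward can be isolated as a small sketch. All sizes here are made up for illustration: each hidden vector is scored against every entity embedding at once, the learned per-entity bias b is added, and sigmoid yields a probability per candidate tail entity.

```python
import torch

torch.manual_seed(0)

# Assumed toy sizes: 5 entities, 8-dimensional embeddings, batch of 2
num_entities, emb_dim = 5, 8
x = torch.randn(2, emb_dim)                      # hidden vectors from the fc layer
entity_emb = torch.randn(num_entities, emb_dim)  # stands in for emb_e.weight
b = torch.zeros(num_entities)                    # stands in for the bias 'b'

# Score against all entities at once, add the bias, squash to probabilities
logits = x @ entity_emb.t() + b                  # (2, num_entities)
pred = torch.sigmoid(logits)                     # one probability per tail entity
```

Scoring every entity in a single matrix multiplication is what makes ConvE's 1-N training efficient compared with scoring one (head, relation, tail) triple at a time.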
Overall Model Architecture
Experiments
Datasets
To validate the model's effectiveness, the authors compare against baselines on the public datasets WN18, FB15k, YAGO3-10, and Countries.
Results
Comparing parameter counts against results shows that, with the same number of parameters, the model achieves higher accuracy.
Reference
http://nysdy.com/post/Convolutional%202D%20Knowledge%20Graph%20Embeddings/
Source code address