Paper Reading: Papers in Frontiers of NLP 2018 collection


1. Paper collection
Note: the original titles of the papers will be added soon.

| Index | Paper | Year | Brief Intro | Note |
|---|---|---|---|---|
| 1 | [Collobert & Weston, ICML ’08] | 2008 | Multi-task learning | MTL: won the Test-of-Time Award at ICML 2018 |
| 2 | [Pennington et al., EMNLP ’14; Levy et al., NIPS ’14] | 2014 | Generates embeddings by matrix factorization | New method of embedding |
| 3 | [Levy et al., TACL ’15] | 2015 | Classic methods (e.g. PMI and SVD) for embedding generation | New method of embedding (see the PPMI + SVD sketch after the table) |
| 4 | [Le & Mikolov, ICML ’14; Kiros et al., NIPS ’15] | 2014/2015 | Skip-gram-style training for sentence representations | Skip-gram |
| 5 | [Grover & Leskovec, KDD ’16] | 2016 | Skip-gram for network (graph) node embeddings | Skip-gram |
| 6 | [Luong et al., ’15] | 2015 | Embedding projection aids transfer learning | Embedding projection |
| 7 | [Hochreiter & Schmidhuber, NeuComp ’97] | 1997 | The original LSTM paper | LSTM |
| 8 | [Kalchbrenner et al., ’17] | 2017 | Dilated CNN | CNN: enables a wider receptive field (see the dilated-convolution sketch after the table) |
| 9 | [Wang et al., ACL ’16] | 2016 | Stacked LSTM and CNN | Stacked model |
| 10 | [Bradbury et al., ICLR ’17] | 2017 | Uses convolution to speed up the LSTM | CNN & LSTM combination |
| 11 | [Tai et al., ACL ’15] | 2015 | Extends the LSTM to recursive (tree-structured) networks | Recursive neural network |
| 12 | [Bastings et al., EMNLP ’17] | 2017 | Graph convolutional neural network | CNN over graphs (trees) |
| 13 | [Levy and Goldberg, ACL ’14] | 2014 | Word embeddings generated from dependencies | Embedding generation |
| 14 | [Wu et al., ’16] | 2016 | Deep LSTM | New seq2seq model |
| 15 | [Kalchbrenner et al., arXiv ’16; Gehring et al., arXiv ’17] | 2017 | Convolutional encoders | New seq2seq model |
| 16 | [Vaswani et al., NIPS ’17] | 2017 | Transformer: pure attention architecture | New seq2seq model (see the attention sketch after the table) |
| 17 | [Chen et al., ACL ’18] | 2018 | Combination of LSTM and Transformer | New seq2seq model |
| 18 | [Vinyals et al., NIPS ’16] | 2016 | Attention in one-shot learning | Attention & one-shot |
| 19.0 | [Graves et al., arXiv ’14] | 2014 | Neural Turing Machine | Memory network |
| 19.1 | [Weston et al., ICLR ’15] | 2015 | Memory Networks | Memory network |
| 19.2 | [Sukhbaatar et al., NIPS ’15] | 2015 | End-to-end Memory Networks | Memory network |
| 19.3 | [Kumar et al., ICML ’16] | 2016 | Dynamic Memory Networks | Memory network |
| 19.4 | [Graves et al., Nature ’16] | 2016 | Differentiable Neural Computer | Memory network |
| 19.5 | [Henaff et al., ICLR ’17] | 2017 | Recurrent Entity Network | Memory network |
| 20 | [Peters et al., NAACL ’18] (read a related paper earlier; notes to be added later) | 2018 | Language-model embeddings used as features | Language model |
| 21 | [Howard & Ruder, ACL ’18] | 2018 | Language model fine-tuned on task data | Language model |
| 22 | [Jia & Liang, EMNLP ’17] | 2017 | Adversarial examples | Adversarial |
| 23 | [Miyato et al., ICLR ’17; Yasunaga et al., NAACL ’18] | 2018 | Adversarial training | Form of regularization |
| 24 | [Ganin et al., JMLR ’16; Kim et al., ACL ’17] | 2017 | Domain-adversarial loss | Form of regularization |
| 25 | [Semeniuta et al., ’18] | 2018 | GANs applied to NLG | GAN for NLP |
| 26 | [Paulus et al., ICLR ’18] | 2018 | RL for summarization | RL with ROUGE loss |
| 27 | [Ranzato et al., ICLR ’16] | 2016 | RL for machine translation | RL with BLEU loss |
| 28 | [Conneau et al., ICLR ’18] | 2018 | Word translation without parallel data | Low-resource scenarios |
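
To make row 3 concrete, here is a minimal sketch of the classic count-based pipeline: build a positive-PMI co-occurrence matrix and factorize it with truncated SVD to get dense word vectors. This is not code from the cited papers; the toy corpus, window size, and dimension k are illustrative assumptions.

```python
# Minimal PPMI + SVD embedding sketch (cf. rows 2-3).
# Toy corpus and hyperparameters are illustrative assumptions.
import numpy as np

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "log"]]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a symmetric window of size 2.
window = 2
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                counts[idx[w], idx[sent[j]]] += 1

# Positive PMI: max(0, log P(w, c) / (P(w) P(c))).
total = counts.sum()
p_w = counts.sum(axis=1, keepdims=True) / total
p_c = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((counts / total) / (p_w * p_c))
ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)

# Rank-k SVD of the PPMI matrix yields one dense k-dim vector per word.
U, S, _ = np.linalg.svd(ppmi)
k = 4
embeddings = U[:, :k] * S[:k]
print({w: embeddings[idx[w]].round(2) for w in ["cat", "dog"]})
```

As Levy et al. argue, this counting-and-factorizing view is closely related to what skip-gram with negative sampling optimizes implicitly.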
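
Row 8's note about receptive fields is easiest to see in code. The sketch below is an illustrative assumption rather than the cited architecture: a causal 1-D convolution whose dilation doubles per layer, so four layers of a width-2 kernel already cover 16 input positions.

```python
# Minimal dilated 1-D convolution sketch (cf. row 8): stacking layers with
# dilation 1, 2, 4, ... grows the receptive field exponentially with depth.
# Kernel values and input are illustrative assumptions.
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Causal 1-D convolution with the given dilation (zero-padded on the left)."""
    k = len(kernel)
    pad = (k - 1) * dilation
    x = np.concatenate([np.zeros(pad), x])
    return np.array([sum(kernel[j] * x[i + j * dilation] for j in range(k))
                     for i in range(len(x) - pad)])

x = np.arange(16, dtype=float)
kernel = np.array([0.5, 0.5])
y = x
for d in (1, 2, 4, 8):  # receptive field after 4 layers: 1 + (2-1)*(1+2+4+8) = 16
    y = dilated_conv1d(y, kernel, d)
print(y.shape)  # (16,) -- sequence length is preserved, context is not truncated
```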
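
For row 16, here is a minimal sketch of scaled dot-product attention, the core operation of the Transformer. The shapes and random inputs are illustrative assumptions; the full model adds multi-head projections, masking, and positional encodings.

```python
# Minimal scaled dot-product attention sketch (cf. row 16).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (n_q, n_k) similarity logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))   # 3 query positions, d_k = 8
K = rng.normal(size=(5, 8))   # 5 key positions
V = rng.normal(size=(5, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```

The 1/sqrt(d_k) scaling keeps the logits from growing with dimension, which would otherwise push the softmax into near one-hot, vanishing-gradient territory.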
