
【NLP】The word2vec Model


Reference: 《深度学习从0到1 - 基于Tensorflow2》 (Deep Learning from 0 to 1, Based on TensorFlow 2)

【Reference: 深入浅出Word2Vec原理解析 - 知乎 (an accessible explanation of Word2Vec, on Zhihu)】

Summary

The predecessor of word2vec: NNLM (Neural Network Language Model)

【Reference: 词向量技术原理及应用详解(二) - 木屐呀 - 博客园】

How many ways are there to implement word2vec?

There are two:
1) CBOW (Continuous Bag of Words): use the context words to predict the center word.
2) Skip-gram: use the center word to predict its context words.

From an implementation point of view, the two differ only in what serves as the input and what serves as the output, as the sketch below illustrates.
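A minimal sketch of that difference, assuming a toy three-word sentence and a window size of 1 (the sentence and helper functions here are illustrative, not taken from the referenced code):

# Toy illustration of how CBOW and skip-gram build (input, output) pairs from the same text.
sentence = "jack like dog".split()

def cbow_pairs(words, window=1):
    pairs = []
    for i, center in enumerate(words):
        context = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
        pairs.append((context, center))          # input: context words, output: center word
    return pairs

def skipgram_pairs(words, window=1):
    pairs = []
    for i, center in enumerate(words):
        context = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
        pairs += [(center, c) for c in context]  # input: center word, output: one context word
    return pairs

print(cbow_pairs(sentence))      # [(['like'], 'jack'), (['jack', 'dog'], 'like'), (['like'], 'dog')]
print(skipgram_pairs(sentence))  # [('jack', 'like'), ('like', 'jack'), ('like', 'dog'), ('dog', 'like')]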

What is the essence of word2vec?

It is unsupervised learning, because the data carry no labels. Judging only by the form of the input and output it looks supervised, since what goes in are word pairs, but it is not: the "labels" are generated automatically from the raw text.

At its core, word2vec is a neural network with a single hidden layer, so it must have an input and an output. But the goal of training is not the predicted word itself, nor classifying words; what matters most is the weight matrix of the hidden layer. In other words, word2vec sets up a supervised-style prediction task purely as a means to learn the hidden-layer weights, which become the word vectors.
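A minimal numeric sketch of that point (toy sizes and random values, not the network from the book): with a one-hot input, the hidden layer output is simply one row of the weight matrix W, i.e. that word's embedding.

import numpy as np

voc_size, embedding_size = 5, 3                    # toy sizes chosen for illustration
rng = np.random.default_rng(0)
W = rng.normal(size=(voc_size, embedding_size))    # hidden-layer weights = the embedding table

word_idx = 2                                       # suppose some word has index 2
one_hot = np.eye(voc_size)[word_idx]               # its one-hot input vector

hidden = one_hot @ W                               # forward pass through the (linear) hidden layer
assert np.allclose(hidden, W[word_idx])            # identical to row 2 of W: that word's vector
print(hidden)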

CBOW

Continuous Bag-of-Words model (CBOW)

The CBOW model feeds the context words into the neural network and predicts the target word.

For example, given the training sentence "我爱北京天安门", we can feed the model "爱" and "天安门" and use "北京" as the target word to be predicted.

The simplest CBOW model just feeds in the previous word and predicts the next word. A minimal forward-pass sketch follows.
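A minimal sketch of the CBOW forward pass under the common formulation (average the context embeddings, then score every vocabulary word); the sizes and indices are made up for illustration and this is not the exact model from the book:

import numpy as np

voc_size, embedding_size = 6, 3
rng = np.random.default_rng(1)
W = rng.normal(size=(voc_size, embedding_size))    # input-to-hidden weights (embedding table)
V = rng.normal(size=(embedding_size, voc_size))    # hidden-to-output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

context_indices = [1, 3]                           # hypothetical indices of the context words
hidden = W[context_indices].mean(axis=0)           # CBOW: average the context word embeddings
scores = hidden @ V                                # one score per vocabulary word
probs = softmax(scores)                            # probability that each word is the target
print(probs.argmax(), probs)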

Skip-Gram

The Skip-Gram model feeds a single word into the neural network and predicts its context words.

PyTorch implementation (bare-bones version)

【Reference: nlp-tutorial/Word2Vec-Skipgram.py at master · graykode/nlp-tutorial】

【Reference: Word2Vec的PyTorch实现_哔哩哔哩_bilibili】

【Reference: Word2Vec的PyTorch实现(乞丐版) - mathor】


Summary:

- Build word2idx.
- Build the training data: the words inside a window are [C-2, C-1, C, C+1, C+2], and the (center, context) pairs are [[C, C-2], [C, C-1], [C, C+1], [C, C+2]].
- Use np.eye(voc_size) to represent each word as a one-hot vector and feed it into the model for training.

import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
import torch.utils.data as Data

dtype = torch.FloatTensor
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

sentences = ["jack like dog", "jack like cat", "jack like animal",
             "dog cat animal", "banana apple cat dog like", "dog fish milk like",
             "dog cat animal like", "jack like apple", "apple like", "jack like banana",
             "apple banana jack movie book music like", "cat dog hate", "cat dog like"]

word_sequence = " ".join(sentences).split()     # ['jack', 'like', 'dog', 'jack', 'like', 'cat', ...]
vocab = list(set(word_sequence))                # build the vocabulary
word2idx = {w: i for i, w in enumerate(vocab)}  # e.g. {'jack': 0, 'like': 1, 'dog': 2, ...}

# Word2Vec parameters
batch_size = 8
embedding_size = 2   # each word is represented by a 2-dim vector
C = 2                # window size: the words in a window are [C-2, C-1, C, C+1, C+2]
voc_size = len(vocab)

# center word / context words
skip_grams = []
# start from the C-th word because the window size is C = 2
for idx in range(C, len(word_sequence) - C):
    # e.g. idx = 2: word_sequence[idx] is 'dog' (assume index 2 in word2idx);
    # the two words before are 'jack', 'like' and the two after are 'jack', 'like'
    center = word2idx[word_sequence[idx]]  # center word
    # [0, 1, 3, 4]: positions of the C words before and the C words after in word_sequence
    context_idx = list(range(idx - C, idx)) + list(range(idx + 1, idx + C + 1))
    # for idx = 2 the context words are 'jack', 'like', 'jack', 'like', i.e. indices 0, 1, 0, 1
    context = [word2idx[word_sequence[i]] for i in context_idx]
    for w in context:
        skip_grams.append([center, w])  # e.g. [[2, 0], [2, 1], [2, 0], [2, 1]]

def make_data(skip_grams):
    input_data = []
    output_data = []
    for i in range(len(skip_grams)):
        # np.eye(voc_size) is the identity matrix; indexing it with skip_grams[i][0]
        # gives that row, i.e. the one-hot representation of the center word
        input_data.append(np.eye(voc_size)[skip_grams[i][0]])
        # the label is simply the context word index skip_grams[i][1]
        output_data.append(skip_grams[i][1])
    return input_data, output_data

input_data, output_data = make_data(skip_grams)
# stack the list of one-hot vectors into one array before converting to a tensor
input_data, output_data = torch.Tensor(np.array(input_data)), torch.LongTensor(output_data)
dataset = Data.TensorDataset(input_data, output_data)
loader = Data.DataLoader(dataset, batch_size, True)

# Model
class Word2Vec(nn.Module):
    def __init__(self):
        super(Word2Vec, self).__init__()
        # W and V are two independent matrices, not each other's transpose.
        # Trick: fix the input/output shapes first, then derive the parameter shapes.
        # X is [batch_size, voc_size] and the hidden layer must output
        # [batch_size, embedding_size], so W is [voc_size, embedding_size]
        self.W = nn.Parameter(torch.randn(voc_size, embedding_size).type(dtype))
        # V likewise: [embedding_size, voc_size]
        self.V = nn.Parameter(torch.randn(embedding_size, voc_size).type(dtype))

    def forward(self, X):
        # X : [batch_size, voc_size], one-hot rows
        # torch.mm only works on 2-dim matrices; torch.matmul works for any dim
        hidden_layer = torch.matmul(X, self.W)             # [batch_size, embedding_size]
        # equivalent to a voc_size-way classification: every word in the vocabulary is a class
        output_layer = torch.matmul(hidden_layer, self.V)  # [batch_size, voc_size]
        return output_layer

model = Word2Vec().to(device)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# Training
for epoch in range(2000):
    for i, (batch_x, batch_y) in enumerate(loader):
        batch_x = batch_x.to(device)
        batch_y = batch_y.to(device)
        pred = model(batch_x)
        loss = criterion(pred, batch_y)
        if (epoch + 1) % 1000 == 0:
            print(epoch + 1, i, loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Plot the learned 2-D word vectors (the rows of W)
W, WT = model.parameters()  # WT is self.V
for i, label in enumerate(vocab):
    x, y = float(W[i][0]), float(W[i][1])  # works because embedding_size = 2
    print(label)
    print(x, y)
    plt.scatter(x, y)
    plt.annotate(label, xy=(x, y), xytext=(5, 2),
                 textcoords='offset points', ha='right', va='bottom')
plt.show()

Sample output from one run (per-batch loss at epochs 1000 and 2000, followed by the printed 2-D coordinates of every word):

1000 0 2.187922716140747
1000 1 2.1874611377716064
1000 2 2.1020612716674805
1000 3 2.1360023021698
1000 4 1.6479374170303345
1000 5 2.1080777645111084
1000 6 2.117255687713623
1000 7 2.5754618644714355
1000 8 2.375575065612793
1000 9 2.4812772274017334
1000 10 2.2279186248779297
1000 11 1.9958131313323975
1000 12 1.9666472673416138
1000 13 1.792773723602295
1000 14 1.9790289402008057
1000 15 2.150097370147705
1000 16 1.8230916261672974
1000 17 1.9916845560073853
1000 18 2.2354393005371094
1000 19 2.253058910369873
1000 20 1.8957509994506836
2000 0 2.1660408973693848
2000 1 1.9071791172027588
2000 2 1.9131343364715576
2000 3 2.0996546745300293
2000 4 1.9192123413085938
2000 5 1.6349347829818726
2000 6 2.433778762817383
2000 7 2.4247307777404785
2000 8 2.1594560146331787
2000 9 1.9543298482894897
2000 10 1.8078333139419556
2000 11 2.490055561065674
2000 12 2.1941933631896973
2000 13 2.463453531265259
2000 14 2.2849888801574707
2000 15 1.7784088850021362
2000 16 1.8803404569625854
2000 17 1.9645321369171143
2000 18 2.036078453063965
2000 19 1.9239177703857422
2000 20 2.261594772338867
animal -0.5263756513595581 3.4223508834838867
apple -0.3384515941143036 1.3274422883987427
milk -1.2358342409133911 0.3438951075077057
hate -1.556404709815979 9.134812355041504
music 0.31392836570739746 0.2262829840183258
movie 2.375382661819458 1.1577153205871582
dog -0.9016568064689636 0.2671743929386139
jack -0.5878503322601318 0.6020950078964233
cat -0.9074932932853699 0.2849980890750885
banana 0.47850462794303894 1.1545497179031372
book 0.4761728048324585 0.21939511597156525
like -0.1496874839067459 0.6957748532295227
fish -2.37762188911438 0.04009028896689415

Because the dataset contains many sentences of the form "jack like <animal>", these words also end up close to one another in the embedding space:

sentences = ["jack like dog", "jack like cat", "jack like animal","dog cat animal", "banana apple cat dog like", "dog fish milk like","dog cat animal like", "jack like apple", "apple like", "jack like banana","apple banana jack movie book music like", "cat dog hate", "cat dog like"]

Increasing the number of training epochs, e.g. for epoch in range(10000):, gives the model more training and makes the clusters clearer.
