

Computing similar Chinese words and clustering with word2vec: usage notes and C source code

For word2vec background, download, and installation, see the earlier article: word2vec詞向量中文文本相似度計算 (word2vec word vectors for Chinese text similarity).
Contents:
  • word2vec usage notes and source code overview
    • 1. Download
    • 2. Chinese corpus
    • 3. Parameters
    • 4. Computing similar words
    • 5. Predicting semantic and syntactic relations from three words
    • 6. Keyword clustering


1. Download


Download address of the official C implementation:
http://word2vec.googlecode.com/svn/trunk/


Compile the word2vec tools by running make:

The build rules ship as makefile.txt; rename makefile.txt to Makefile, then run make in that directory to produce the executables. (The build prints quite a few warnings; if your gcc does not support the pthread options, comment the offending flags out.)
Then run the demo scripts ./demo-word.sh and ./demo-phrases.sh:
a) They download text8 from http://mattmahoney.net/dc/text8.zip (a plain-text file under 100 MB once unpacked; you can also download and unpack it into the same directory yourself). It can be replaced with your own corpus.
b) word2vec is run to train word vectors and write them to vectors.bin.
c) Once sh demo-word.sh has produced vectors.bin, the trained vectors can be reused directly next time, e.g. ./distance vectors.bin.
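If you want a quick, standalone check that vectors.bin was written correctly, a few lines of C are enough. The sketch below (check_vectors.c is a hypothetical helper of mine, not part of the word2vec package) only assumes the -binary 1 format that distance.c also reads: a plain-text header with vocabulary size and vector dimensionality, followed by each word and its raw float vector.

/* check_vectors.c - quick sanity check for a vectors.bin written with -binary 1.
   Hypothetical helper, not part of the word2vec package. */
#include <stdio.h>

int main(int argc, char **argv) {
  if (argc < 2) { printf("Usage: ./check_vectors vectors.bin\n"); return 1; }
  FILE *f = fopen(argv[1], "rb");
  if (f == NULL) { printf("Cannot open %s\n", argv[1]); return 1; }
  long long words = 0, size = 0;
  /* The file starts with a plain-text header: vocabulary size and vector dimensionality. */
  if (fscanf(f, "%lld %lld", &words, &size) != 2) { printf("Unexpected header\n"); fclose(f); return 1; }
  printf("vocabulary entries: %lld, vector dimensions: %lld\n", words, size);
  /* Each entry is the word itself (terminated by a space) followed by size raw floats. */
  char w[64];
  int ch = fgetc(f);
  while (ch == '\n') ch = fgetc(f);        /* skip the newline that ends the header */
  int a = 0;
  while (ch != EOF && ch != ' ') { if (a < 63) w[a++] = (char)ch; ch = fgetc(f); }
  w[a] = 0;
  float v[3] = {0, 0, 0};                  /* assumes size >= 3 */
  if (fread(v, sizeof(float), 3, f) == 3)
    printf("first word: %s, first components: %f %f %f\n", w, v[0], v[1], v[2]);
  fclose(f);
  return 0;
}

Compile with gcc check_vectors.c -o check_vectors and run ./check_vectors vectors.bin; the reported counts should match the vocabulary size printed during training and the -size value (200 in this article).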



2. Chinese corpus


The corpus consists of text I crawled with Selenium from three encyclopedias (Baidu Baike, Hudong Baike, and Wikipedia). Each encyclopedia contributes 100 country articles, 300 documents in total (0001.txt to 0300.txt), which were then segmented with the Jieba Chinese word segmenter.

The final output is Result_Country.txt, which merges all the texts: 300 lines, each line holding the segmented text of one country.
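To double-check that your own corpus follows the same layout (one whitespace-segmented document per line, which is all word2vec needs), a tiny counter like the one below can help. It is an illustrative sketch of mine, not part of the toolkit; the file name Result_Country.txt is the one described above.

/* corpus_stats.c - count lines and tokens in a whitespace-segmented corpus file.
   Illustrative sketch only; word2vec just needs space-separated tokens. */
#include <stdio.h>

int main(void) {
  FILE *f = fopen("Result_Country.txt", "r");  /* one segmented document (country) per line */
  if (f == NULL) { printf("Result_Country.txt not found\n"); return 1; }
  long long lines = 0, tokens = 0;
  int ch, in_token = 0;
  while ((ch = fgetc(f)) != EOF) {
    if (ch == '\n') lines++;
    if (ch == ' ' || ch == '\t' || ch == '\n') {
      if (in_token) tokens++;
      in_token = 0;
    } else in_token = 1;
  }
  if (in_token) { tokens++; lines++; }     /* handle a final line without a trailing newline */
  printf("documents (lines): %lld, tokens: %lld\n", lines, tokens);
  fclose(f);
  return 0;
}

For the corpus described above it should report 300 documents.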




3. Parameters


The parameter table below comes from the article: Windows下使用Word2vec繼續詞向量訓練 by 一只鸟的天空.
A recommended reference for Java users: word2vec使用指導.



The demo-word.sh script (reference: http://jacoxu.com/?p=1084):
make
#if [ ! -e text8 ]; then
#  wget http://mattmahoney.net/dc/text8.zip -O text8.gz
#  gzip -d text8.gz -f
#fi
time ./word2vec -train Result_Country.txt -output vectors.bin -cbow 1 -size 200 -window 8 -negative 25 -hs 0 -sample 1e-4 -threads 20 -binary 1 -iter 15
./distance vectors.bin

The options are explained below (the descriptions give the general meaning of each flag; note that the command above actually uses -cbow 1, -negative 25, and -hs 0):
-train Result_Country.txt  the input file is Result_Country.txt
-output vectors.bin  the output file is vectors.bin
-cbow 0  do not use the CBOW model and default to the Skip-gram model
-size 200  each word vector has 200 dimensions
-window 8  the training window is 8, i.e. the eight words before and the eight words after the current word are considered (internally the code also samples a random effective window for each word, no larger than the value given by -window)
-negative 0  whether to use negative sampling (NEG); 0 means it is not used
-hs 1  whether to use hierarchical softmax (HS); 0 means off, 1 means on
-sample  the subsampling threshold: the more frequent a word is in the training data, the more aggressively it is down-sampled (a numerical sketch follows at the end of this section)
-binary 1  store the result in binary format; 0 stores it as plain text (with plain text you can open the file and read each word together with its vector)

Besides the options in the command above, word2vec has a few more parameters that are quite useful, for example:

-alpha  the initial learning rate, 0.025 by default
-min-count  the minimum frequency, 5 by default; words that occur fewer than 5 times in the corpus are discarded
-classes  the number of clusters; a look at the source shows it uses k-means clustering

Note that -threads 20 (the number of training threads) also affects the results.
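To make -sample more concrete: in word2vec.c a word occurring count times in a corpus of train_words tokens is kept with probability roughly (sqrt(count / (sample * train_words)) + 1) * sample * train_words / count, capped at 1. The sketch below simply evaluates that expression for a few made-up counts (this is my reading of the released source; the numbers are invented for illustration).

/* subsample_prob.c - keep probability behind word2vec's -sample option,
   as implemented in word2vec.c (my reading of the released source; illustrative only). */
#include <stdio.h>
#include <math.h>

/* count: occurrences of the word; train_words: total tokens; sample: the -sample value */
double keep_probability(long long count, long long train_words, double sample) {
  double t = sample * (double)train_words;
  double p = (sqrt((double)count / t) + 1.0) * t / (double)count;
  return p > 1.0 ? 1.0 : p;                 /* rare words are always kept */
}

int main(void) {
  long long train_words = 10000000;         /* made-up corpus size */
  double sample = 1e-4;                     /* the value used in demo-word.sh */
  long long counts[] = {50, 5000, 500000};  /* rare, medium, very frequent word */
  for (int i = 0; i < 3; i++)
    printf("count=%lld  keep probability=%.4f\n",
           counts[i], keep_probability(counts[i], train_words, sample));
  return 0;
}

Compile with gcc subsample_prob.c -o subsample_prob -lm. With sample = 1e-4 the rare word is always kept while the very frequent one survives only a few percent of the time, which is exactly the down-sampling effect the option is meant to provide.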


4. Computing similar words


Command: sh demo-word.sh
The commands inside demo-word.sh:
make
#if [ ! -e text8 ]; then
#  wget http://mattmahoney.net/dc/text8.zip -O text8.gz
#  gzip -d text8.gz -f
#fi
time ./word2vec -train Result_Country.txt -output vectors.bin -cbow 1 -size 200 -window 8 -negative 25 -hs 0 -sample 1e-4 -threads 20 -binary 1 -iter 15
./distance vectors.bin

Running it produces output like the following:


If you want to skip training and reuse the previously trained vectors.bin, just run ./distance vectors.bin.
Entering "阿富汗" (Afghanistan) prints similar words and their cosine distances, e.g. "喀布爾" (Kabul, the Afghan capital), "坎大哈" (Kandahar, an Afghan city), and comparable Middle Eastern countries such as "伊拉克" (Iraq).


輸入"國歌"輸出相似詞例如以下圖所看到的:


Not only nouns yield similar words; verbs work too. Entering "位于" (to be located in) gives:


distance.c source code:
//  Copyright 2013 Google Inc. All Rights Reserved.
//
//  Licensed under the Apache License, Version 2.0 (the "License");
//  you may not use this file except in compliance with the License.
//  You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
//  Unless required by applicable law or agreed to in writing, software
//  distributed under the License is distributed on an "AS IS" BASIS,
//  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
//  See the License for the specific language governing permissions and
//  limitations under the License.

#include <stdio.h>
#include <string.h>
#include <math.h>
#include <malloc.h>

const long long max_size = 2000;         // max length of strings
const long long N = 40;                  // number of closest words that will be shown
const long long max_w = 50;              // max length of vocabulary entries

int main(int argc, char **argv) {
  FILE *f;
  char st1[max_size];
  char *bestw[N];
  char file_name[max_size], st[100][max_size];
  float dist, len, bestd[N], vec[max_size];
  long long words, size, a, b, c, d, cn, bi[100];
  char ch;
  float *M;
  char *vocab;
  if (argc < 2) {
    printf("Usage: ./distance <FILE>\nwhere FILE contains word projections in the BINARY FORMAT\n");
    return 0;
  }
  strcpy(file_name, argv[1]);
  f = fopen(file_name, "rb");
  if (f == NULL) {
    printf("Input file not found\n");
    return -1;
  }
  fscanf(f, "%lld", &words);
  fscanf(f, "%lld", &size);
  vocab = (char *)malloc((long long)words * max_w * sizeof(char));
  for (a = 0; a < N; a++) bestw[a] = (char *)malloc(max_size * sizeof(char));
  M = (float *)malloc((long long)words * (long long)size * sizeof(float));
  if (M == NULL) {
    printf("Cannot allocate memory: %lld MB    %lld  %lld\n", (long long)words * size * sizeof(float) / 1048576, words, size);
    return -1;
  }
  for (b = 0; b < words; b++) {
    a = 0;
    while (1) {
      vocab[b * max_w + a] = fgetc(f);
      if (feof(f) || (vocab[b * max_w + a] == ' ')) break;
      if ((a < max_w) && (vocab[b * max_w + a] != '\n')) a++;
    }
    vocab[b * max_w + a] = 0;
    for (a = 0; a < size; a++) fread(&M[a + b * size], sizeof(float), 1, f);
    len = 0;
    for (a = 0; a < size; a++) len += M[a + b * size] * M[a + b * size];
    len = sqrt(len);
    for (a = 0; a < size; a++) M[a + b * size] /= len;
  }
  fclose(f);
  while (1) {
    for (a = 0; a < N; a++) bestd[a] = 0;
    for (a = 0; a < N; a++) bestw[a][0] = 0;
    printf("Enter word or sentence (EXIT to break): ");
    a = 0;
    while (1) {
      st1[a] = fgetc(stdin);
      if ((st1[a] == '\n') || (a >= max_size - 1)) {
        st1[a] = 0;
        break;
      }
      a++;
    }
    if (!strcmp(st1, "EXIT")) break;
    cn = 0;
    b = 0;
    c = 0;
    while (1) {
      st[cn][b] = st1[c];
      b++;
      c++;
      st[cn][b] = 0;
      if (st1[c] == 0) break;
      if (st1[c] == ' ') {
        cn++;
        b = 0;
        c++;
      }
    }
    cn++;
    for (a = 0; a < cn; a++) {
      for (b = 0; b < words; b++) if (!strcmp(&vocab[b * max_w], st[a])) break;
      if (b == words) b = -1;
      bi[a] = b;
      printf("\nWord: %s  Position in vocabulary: %lld\n", st[a], bi[a]);
      if (b == -1) {
        printf("Out of dictionary word!\n");
        break;
      }
    }
    if (b == -1) continue;
    printf("\n                                              Word       Cosine distance\n------------------------------------------------------------------------\n");
    for (a = 0; a < size; a++) vec[a] = 0;
    for (b = 0; b < cn; b++) {
      if (bi[b] == -1) continue;
      for (a = 0; a < size; a++) vec[a] += M[a + bi[b] * size];
    }
    len = 0;
    for (a = 0; a < size; a++) len += vec[a] * vec[a];
    len = sqrt(len);
    for (a = 0; a < size; a++) vec[a] /= len;
    for (a = 0; a < N; a++) bestd[a] = -1;
    for (a = 0; a < N; a++) bestw[a][0] = 0;
    for (c = 0; c < words; c++) {
      a = 0;
      for (b = 0; b < cn; b++) if (bi[b] == c) a = 1;
      if (a == 1) continue;
      dist = 0;
      for (a = 0; a < size; a++) dist += vec[a] * M[a + c * size];
      for (a = 0; a < N; a++) {
        if (dist > bestd[a]) {
          for (d = N - 1; d > a; d--) {
            bestd[d] = bestd[d - 1];
            strcpy(bestw[d], bestw[d - 1]);
          }
          bestd[a] = dist;
          strcpy(bestw[a], &vocab[c * max_w]);
          break;
        }
      }
    }
    for (a = 0; a < N; a++) printf("%50s\t\t%f\n", bestw[a], bestd[a]);
  }
  return 0;
}


5. Predicting semantic and syntactic relations from three words


Command: sh demo-analogy.sh

The commands inside demo-analogy.sh:
make
#if [ ! -e text8 ]; then
#  wget http://mattmahoney.net/dc/text8.zip -O text8.gz
#  gzip -d text8.gz -f
#fi
echo -------------------------------------------------------------------------------------
echo Note that for the word analogy to perform well, the model should be trained on much larger data set
echo Example input: paris france berlin
echo -------------------------------------------------------------------------------------
time ./word2vec -train Result_Country.txt -output vectors.bin -cbow 1 -size 200 -window 8 -negative 25 -hs 0 -sample 1e-4 -threads 20 -binary 1 -iter 15
./word-analogy vectors.bin

Running it produces output like the following:



輸入"韓國、首爾、日本"能夠預測其首都"東京":
韓國的首都是首爾 ?<==> ?日本的首都是東京



輸入"中國 亞洲 德國"能夠預測語義語法關系"歐洲":
中國位于亞洲 <==> 德國位于歐洲



If only two words are entered, the tool prints an error message; entering "EXIT" quits the prompt.
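Under the hood the analogy is plain vector arithmetic: for inputs A B C the tool builds vec = v(B) - v(A) + v(C), normalizes it, and reports the vocabulary words with the highest cosine similarity to vec, excluding A, B and C themselves, as the word-analogy.c source below shows. The toy sketch here replays that on made-up 3-dimensional vectors, so the numbers are purely illustrative; real vectors come from vectors.bin and have 200 dimensions.

/* analogy_toy.c - the B - A + C offset trick on made-up 3-d vectors.
   Vectors are invented for illustration; real vectors come from vectors.bin. */
#include <stdio.h>
#include <math.h>

#define DIM 3
#define WORDS 5

const char *vocab[WORDS] = {"韓國", "首爾", "日本", "東京", "亞洲"};
/* made-up "country vs. capital" toy vectors */
float M[WORDS][DIM] = {
  {0.9f, 0.1f, 0.2f},   /* 韓國 */
  {0.7f, 0.6f, 0.1f},   /* 首爾 */
  {0.9f, 0.1f, 0.5f},   /* 日本 */
  {0.7f, 0.6f, 0.4f},   /* 東京 */
  {0.2f, 0.1f, 0.3f},   /* 亞洲 */
};

static void normalize(float *v) {
  float len = 0;
  for (int i = 0; i < DIM; i++) len += v[i] * v[i];
  len = sqrtf(len);
  for (int i = 0; i < DIM; i++) v[i] /= len;
}

int main(void) {
  for (int w = 0; w < WORDS; w++) normalize(M[w]);   /* unit length, as distance/word-analogy do */
  float vec[DIM];
  /* A=韓國 (index 0), B=首爾 (1), C=日本 (2): vec = B - A + C */
  for (int i = 0; i < DIM; i++) vec[i] = M[1][i] - M[0][i] + M[2][i];
  normalize(vec);
  int best = -1;
  float bestd = -2;
  for (int w = 0; w < WORDS; w++) {
    if (w <= 2) continue;                            /* skip the three input words */
    float dist = 0;
    for (int i = 0; i < DIM; i++) dist += vec[i] * M[w][i];
    if (dist > bestd) { bestd = dist; best = w; }
  }
  printf("nearest word to v(首爾) - v(韓國) + v(日本): %s (cosine %.3f)\n", vocab[best], bestd);
  return 0;
}

Compiled with gcc analogy_toy.c -o analogy_toy -lm, it prints 東京 as the nearest candidate, mirroring what word-analogy does on the real vectors.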


word-analogy.c source code:

//  Copyright 2013 Google Inc. All Rights Reserved.
//
//  Licensed under the Apache License, Version 2.0 (the "License");
//  you may not use this file except in compliance with the License.
//  You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
//  Unless required by applicable law or agreed to in writing, software
//  distributed under the License is distributed on an "AS IS" BASIS,
//  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
//  See the License for the specific language governing permissions and
//  limitations under the License.

#include <stdio.h>
#include <string.h>
#include <math.h>
#include <malloc.h>

const long long max_size = 2000;         // max length of strings
const long long N = 40;                  // number of closest words that will be shown
const long long max_w = 50;              // max length of vocabulary entries

int main(int argc, char **argv) {
  FILE *f;
  char st1[max_size];
  char bestw[N][max_size];
  char file_name[max_size], st[100][max_size];
  float dist, len, bestd[N], vec[max_size];
  long long words, size, a, b, c, d, cn, bi[100];
  char ch;
  float *M;
  char *vocab;
  if (argc < 2) {
    printf("Usage: ./word-analogy <FILE>\nwhere FILE contains word projections in the BINARY FORMAT\n");
    return 0;
  }
  strcpy(file_name, argv[1]);
  f = fopen(file_name, "rb");
  if (f == NULL) {
    printf("Input file not found\n");
    return -1;
  }
  fscanf(f, "%lld", &words);
  fscanf(f, "%lld", &size);
  vocab = (char *)malloc((long long)words * max_w * sizeof(char));
  M = (float *)malloc((long long)words * (long long)size * sizeof(float));
  if (M == NULL) {
    printf("Cannot allocate memory: %lld MB    %lld  %lld\n", (long long)words * size * sizeof(float) / 1048576, words, size);
    return -1;
  }
  for (b = 0; b < words; b++) {
    a = 0;
    while (1) {
      vocab[b * max_w + a] = fgetc(f);
      if (feof(f) || (vocab[b * max_w + a] == ' ')) break;
      if ((a < max_w) && (vocab[b * max_w + a] != '\n')) a++;
    }
    vocab[b * max_w + a] = 0;
    for (a = 0; a < size; a++) fread(&M[a + b * size], sizeof(float), 1, f);
    len = 0;
    for (a = 0; a < size; a++) len += M[a + b * size] * M[a + b * size];
    len = sqrt(len);
    for (a = 0; a < size; a++) M[a + b * size] /= len;
  }
  fclose(f);
  while (1) {
    for (a = 0; a < N; a++) bestd[a] = 0;
    for (a = 0; a < N; a++) bestw[a][0] = 0;
    printf("Enter three words (EXIT to break): ");
    a = 0;
    while (1) {
      st1[a] = fgetc(stdin);
      if ((st1[a] == '\n') || (a >= max_size - 1)) {
        st1[a] = 0;
        break;
      }
      a++;
    }
    if (!strcmp(st1, "EXIT")) break;
    cn = 0;
    b = 0;
    c = 0;
    while (1) {
      st[cn][b] = st1[c];
      b++;
      c++;
      st[cn][b] = 0;
      if (st1[c] == 0) break;
      if (st1[c] == ' ') {
        cn++;
        b = 0;
        c++;
      }
    }
    cn++;
    if (cn < 3) {
      printf("Only %lld words were entered.. three words are needed at the input to perform the calculation\n", cn);
      continue;
    }
    for (a = 0; a < cn; a++) {
      for (b = 0; b < words; b++) if (!strcmp(&vocab[b * max_w], st[a])) break;
      if (b == words) b = 0;
      bi[a] = b;
      printf("\nWord: %s  Position in vocabulary: %lld\n", st[a], bi[a]);
      if (b == 0) {
        printf("Out of dictionary word!\n");
        break;
      }
    }
    if (b == 0) continue;
    printf("\n                                              Word              Distance\n------------------------------------------------------------------------\n");
    for (a = 0; a < size; a++) vec[a] = M[a + bi[1] * size] - M[a + bi[0] * size] + M[a + bi[2] * size];
    len = 0;
    for (a = 0; a < size; a++) len += vec[a] * vec[a];
    len = sqrt(len);
    for (a = 0; a < size; a++) vec[a] /= len;
    for (a = 0; a < N; a++) bestd[a] = 0;
    for (a = 0; a < N; a++) bestw[a][0] = 0;
    for (c = 0; c < words; c++) {
      if (c == bi[0]) continue;
      if (c == bi[1]) continue;
      if (c == bi[2]) continue;
      a = 0;
      for (b = 0; b < cn; b++) if (bi[b] == c) a = 1;
      if (a == 1) continue;
      dist = 0;
      for (a = 0; a < size; a++) dist += vec[a] * M[a + c * size];
      for (a = 0; a < N; a++) {
        if (dist > bestd[a]) {
          for (d = N - 1; d > a; d--) {
            bestd[d] = bestd[d - 1];
            strcpy(bestw[d], bestw[d - 1]);
          }
          bestd[a] = dist;
          strcpy(bestw[a], &vocab[c * max_w]);
          break;
        }
      }
    }
    for (a = 0; a < N; a++) printf("%50s\t\t%f\n", bestw[a], bestd[a]);
  }
  return 0;
}

6. Keyword clustering


Command: sh demo-classes.sh
The commands inside demo-classes.sh:
make
#if [ ! -e text8 ]; then
#  wget http://mattmahoney.net/dc/text8.zip -O text8.gz
#  gzip -d text8.gz -f
#fi
time ./word2vec -train Result_Country.txt -output classes.txt -cbow 1 -size 200 -window 8 -negative 25 -hs 0 -sample 1e-4 -threads 20 -iter 15 -classes 100
sort classes.txt -k 2 -n > classes.sorted.txt
echo The word classes were saved to file classes.sorted.txt

Running it produces output like the following:


The generated file classes.txt and the sorted file classes.sorted.txt:
The clustering algorithm is k-means, with the number of clusters set to 100 (class ids 0 to 99); the keywords of each class are shown below. How to derive a class label for each of the 300 document lines, though, is something I have not worked out yet.
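On that open question, one possible (untested) approach: classes.txt written with -classes contains one "word class_id" pair per line as far as I can tell from word2vec.c, so each of the 300 lines of Result_Country.txt can be labeled by a majority vote over the class ids of its words. The sketch below does exactly that with a naive linear lookup; the file names and the NUM_CLASS value follow this article's setup, everything else is an assumption of mine.

/* doc_class_vote.c - one possible way to assign a class label to each document line:
   majority vote over the k-means class ids of the words in the line.
   Hypothetical sketch; it assumes classes.txt has one "word class_id" pair per line
   (the format written by word2vec -classes) and uses a slow linear lookup. */
#include <stdio.h>
#include <string.h>

#define MAX_VOCAB  200000
#define MAX_WORD   128
#define MAX_LINE   1000000
#define NUM_CLASS  100                           /* the -classes 100 value used above */

static char vocab[MAX_VOCAB][MAX_WORD];
static int  cls[MAX_VOCAB];
static int  nvocab = 0;

static int lookup(const char *w) {               /* linear scan: fine for a small vocabulary */
  for (int i = 0; i < nvocab; i++)
    if (!strcmp(vocab[i], w)) return cls[i];
  return -1;
}

int main(void) {
  FILE *fc = fopen("classes.txt", "r");
  if (!fc) { printf("classes.txt not found\n"); return 1; }
  while (nvocab < MAX_VOCAB &&
         fscanf(fc, "%127s %d", vocab[nvocab], &cls[nvocab]) == 2) nvocab++;
  fclose(fc);

  FILE *fd = fopen("Result_Country.txt", "r");   /* one segmented document per line */
  if (!fd) { printf("Result_Country.txt not found\n"); return 1; }
  static char line[MAX_LINE];
  int doc = 0;
  while (fgets(line, MAX_LINE, fd)) {
    int votes[NUM_CLASS] = {0};
    for (char *tok = strtok(line, " \t\r\n"); tok; tok = strtok(NULL, " \t\r\n")) {
      int c = lookup(tok);
      if (c >= 0 && c < NUM_CLASS) votes[c]++;   /* ignore out-of-vocabulary tokens */
    }
    int best = 0;
    for (int c = 1; c < NUM_CLASS; c++) if (votes[c] > votes[best]) best = c;
    printf("document %04d -> class %d (%d votes)\n", ++doc, best, votes[best]);
  }
  fclose(fd);
  return 0;
}

Whether a majority vote is the right notion of a document label is debatable, but it answers the mechanical question of mapping classes.txt back onto the 300 lines.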



The clustering code itself is in the TrainModel() function of word2vec.c:




demo-phrases.sh (word2phrase.c) joins frequently co-occurring words into phrases.
I hope this article is helpful, especially if you are working through the word2vec basics.


Recommended articles:
  文本深度表示模型Word2Vec - 小唯THU
  利用中文數據跑Google開源項目word2vec - hebin
  Word2vec在事件挖掘中的調研 - 熱點事件推薦 (a nice line of thought)


(By: Eastmount, 2016-02-20, 2 a.m.  http://blog.csdn.net/eastmount/)

