KNN for Chinese Text Classification


Adapted from this blog post:

http://blog.csdn.net/github_36326955/article/details/54891204

These are my notes on that post.

Run the scripts in order 1, 2, 3, 4:

1.py (corpus_segment.py)

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
"""
@version: python2.7.8
@author: XiangguoSun
@contact: sunxiangguodut@qq.com
@file: corpus_segment.py
@time: 2017/2/5 15:28
@software: PyCharm
"""
import sys
import os
import jieba
# Configure a UTF-8 output environment
reload(sys)
sys.setdefaultencoding('utf-8')

# Save content to a file
def savefile(savepath, content):
    with open(savepath, "wb") as fp:
        fp.write(content)
    '''
    The with statement (available since Python 2.6) spares us the boilerplate
    of explicit close() and try/finally; Python 2.5 would need
    from __future__ import with_statement.
    '''

# Read a file
def readfile(path):
    with open(path, "rb") as fp:
        content = fp.read()
    return content

def corpus_segment(corpus_path, seg_path):
    '''
    corpus_path is the path of the unsegmented corpus;
    seg_path is the path where the segmented corpus will be stored.
    '''
    catelist = os.listdir(corpus_path)  # all subdirectories of corpus_path
    '''
    The subdirectory names are the category names. For example, in
    train_corpus/art/21.txt, 'train_corpus/' is corpus_path and 'art' is one
    member of catelist.
    '''

    # Process every file under each directory (i.e. each category)
    for mydir in catelist:
        '''
        Here mydir is the 'art' part of train_corpus/art/21.txt (one of the
        categories in catelist).
        '''
        class_path = corpus_path + mydir + "/"  # category subdirectory, e.g. train_corpus/art/
        seg_dir = seg_path + mydir + "/"  # matching output directory, e.g. train_corpus_seg/art/

        if not os.path.exists(seg_dir):  # create the output directory if it does not exist
            os.makedirs(seg_dir)

        file_list = os.listdir(class_path)  # all texts of one category in the unsegmented corpus
        '''
        For train_corpus/art/ containing 21.txt, 22.txt, 23.txt, ...
        file_list = ['21.txt', '22.txt', ...]
        '''
        for file_path in file_list:  # iterate over all files in this category
            fullname = class_path + file_path  # full path, e.g. train_corpus/art/21.txt
            content = readfile(fullname)  # read the file content
            '''
            At this point content holds every character of the original text,
            including redundant spaces, blank lines, carriage returns, and so
            on. We strip these irrelevant characters so that only punctuation
            separates the compact text.
            '''
            content = content.replace("\r\n", "")  # remove line breaks
            content = content.replace(" ", "")  # remove blank lines and extra spaces
            content_seg = jieba.cut(content)  # segment the file content
            savefile(seg_dir + file_path, " ".join(content_seg))  # save to the segmented-corpus directory

    print "Chinese corpus segmentation finished!!!"

'''
If if __name__ == "__main__": is unfamiliar, see
http://imoyao.lofter.com/post/3492bc_bd0c4ce
In short: when another Python file calls this file's functions, or this file
is imported as a module, the code below does not run; it runs only when the
file is executed directly from the command line or from an IDE such as
PyCharm. This part therefore acts as a functional test.
'''
if __name__ == "__main__":
    # Segment the training set
    corpus_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/train/"  # unsegmented corpus path
    seg_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/train_corpus_seg/"  # segmented corpus path (this script's output)
    corpus_segment(corpus_path, seg_path)

    # Segment the test set
    corpus_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/answer/"  # unsegmented corpus path
    seg_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/test_corpus_seg/"  # segmented corpus path (this script's output)
    corpus_segment(corpus_path, seg_path)

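As a quick sanity check of what 1.py produces, the snippet below (a minimal sketch, not part of the original scripts; the sample sentence is made up) shows that jieba.cut returns a generator of tokens, which the script joins with spaces before saving:

# -*- coding: UTF-8 -*-
import jieba

sentence = u"中文文本分类是自然语言处理的基础任务"  # hypothetical input sentence
tokens = jieba.cut(sentence)  # generator of unicode tokens
print " ".join(tokens)  # space-separated tokens, the same format as the segmented corpus files
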
2.py (corpus2Bunch.py)

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
"""
@version: python2.7.8
@author: XiangguoSun
@contact: sunxiangguodut@qq.com
@file: corpus2Bunch.py
@time: 2017/2/7 7:41
@software: PyCharm
"""
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
import os  # built-in module for file/directory operations; we use os.listdir
import cPickle as pickle  # import cPickle under the alias pickle
'''
Python also ships a pure-Python module named pickle; the name clash is
harmless here. For cPickle vs. pickle, see the original author's post:
http://blog.csdn.net/github_36326955/article/details/54882506
Below we use cPickle.dump from this module.
'''
from sklearn.datasets.base import Bunch
# No deep knowledge of sklearn is required here; just remember that this is
# how the Bunch data structure is imported.


def _readfile(path):
    '''Read a file.'''
    # The leading underscore marks the function as private by convention only:
    # it can still be called from outside; it merely improves readability.
    with open(path, "rb") as fp:
        content = fp.read()
    return content

def corpus2Bunch(wordbag_path, seg_path):
    catelist = os.listdir(seg_path)  # subdirectories of seg_path, i.e. the category names
    # Create a Bunch instance
    bunch = Bunch(target_name=[], label=[], filenames=[], contents=[])
    bunch.target_name.extend(catelist)
    '''
    extend(addlist) is a Python list method that extends the original list
    with all elements of another list (addlist).
    '''
    # Collect all files under each directory
    for mydir in catelist:
        class_path = seg_path + mydir + "/"  # category subdirectory path
        file_list = os.listdir(class_path)  # all files under class_path
        for file_path in file_list:  # iterate over the files of this category
            fullname = class_path + file_path  # full file path
            bunch.label.append(mydir)
            bunch.filenames.append(fullname)
            bunch.contents.append(_readfile(fullname))  # read the file content
            '''append(element) is a Python list method that adds a single element; note the difference from extend().'''
    # Store the bunch at wordbag_path
    with open(wordbag_path, "wb") as file_obj:
        pickle.dump(bunch, file_obj)
    print "Finished building the text Bunch objects!!!"

if __name__ == "__main__":
    # Turn the training set into a Bunch:
    wordbag_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/train_word_bag/train_set.dat"  # Bunch storage path (output)
    seg_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/train_corpus_seg/"  # segmented corpus path (input)
    corpus2Bunch(wordbag_path, seg_path)

    # Turn the test set into a Bunch:
    wordbag_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/test_word_bag/test_set.dat"  # Bunch storage path (output)
    seg_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/test_corpus_seg/"  # segmented corpus path (input)
    corpus2Bunch(wordbag_path, seg_path)
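
To make the append/extend distinction in the comments above concrete, here is a minimal sketch (the category names are hypothetical; the Bunch import matches the old sklearn.datasets.base location used in these scripts):

from sklearn.datasets.base import Bunch

bunch = Bunch(target_name=[], label=[], filenames=[], contents=[])
bunch.target_name.extend(['art', 'sports'])  # extend: add every element of another list
bunch.label.append('art')     # append: add one element
bunch.label.append('sports')
print bunch.target_name  # ['art', 'sports']
print bunch.label        # ['art', 'sports']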

3.py (TFIDF_space.py)

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
"""
@version: python2.7.8
@author: XiangguoSun
@contact: sunxiangguodut@qq.com
@file: TFIDF_space.py
@time: 2017/2/8 11:39
@software: PyCharm
"""
import sys
reload(sys)
sys.setdefaultencoding('utf-8')

from sklearn.datasets.base import Bunch
import cPickle as pickle
from sklearn.feature_extraction.text import TfidfVectorizer

def _readfile(path):
    with open(path, "rb") as fp:
        content = fp.read()
    return content

def _readbunchobj(path):
    with open(path, "rb") as file_obj:
        bunch = pickle.load(file_obj)
    return bunch

def _writebunchobj(path, bunchobj):
    with open(path, "wb") as file_obj:
        pickle.dump(bunchobj, file_obj)

def vector_space(stopword_path, bunch_path, space_path, train_tfidf_path=None):

    stpwrdlst = _readfile(stopword_path).splitlines()
    bunch = _readbunchobj(bunch_path)
    tfidfspace = Bunch(target_name=bunch.target_name, label=bunch.label, filenames=bunch.filenames, tdm=[], vocabulary={})

    if train_tfidf_path is not None:
        # Test set: reuse the vocabulary built from the training set
        trainbunch = _readbunchobj(train_tfidf_path)
        tfidfspace.vocabulary = trainbunch.vocabulary
        vectorizer = TfidfVectorizer(stop_words=stpwrdlst, sublinear_tf=True, max_df=0.5, vocabulary=trainbunch.vocabulary)
        tfidfspace.tdm = vectorizer.fit_transform(bunch.contents)

    else:
        # Training set: build the vocabulary from scratch
        vectorizer = TfidfVectorizer(stop_words=stpwrdlst, sublinear_tf=True, max_df=0.5)
        tfidfspace.tdm = vectorizer.fit_transform(bunch.contents)
        tfidfspace.vocabulary = vectorizer.vocabulary_

    _writebunchobj(space_path, tfidfspace)
    print "TF-IDF vector space created successfully!!!"

if __name__ == '__main__':

    stopword_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/train_word_bag/hlt_stop_words.txt"  # input file

    train_bunch_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/train_word_bag/train_set.dat"  # input file
    space_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/train_word_bag/tfidfspace.dat"  # output file
    vector_space(stopword_path, train_bunch_path, space_path)

    train_tfidf_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/train_word_bag/tfidfspace.dat"  # input file, produced by the step above
    test_bunch_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/test_word_bag/test_set.dat"  # input file
    test_space_path = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204_tenwhy/chinese_text_classification-master/test_word_bag/testspace.dat"  # output file

    vector_space(stopword_path, test_bunch_path, test_space_path, train_tfidf_path)
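
The important detail in vector_space is the train_tfidf_path branch: the test set must be vectorized with the vocabulary learned from the training set, otherwise the two TF-IDF matrices would have different columns and a classifier trained on one could not score the other. A minimal sketch of the idea, with made-up two-document corpora:

from sklearn.feature_extraction.text import TfidfVectorizer

train_docs = [u"比赛 球队 进球", u"股票 市场 价格"]  # hypothetical segmented training texts
test_docs = [u"比赛 价格"]                           # hypothetical segmented test text

train_vec = TfidfVectorizer()
train_tdm = train_vec.fit_transform(train_docs)

# Reuse the training vocabulary so test columns line up with training columns
test_vec = TfidfVectorizer(vocabulary=train_vec.vocabulary_)
test_tdm = test_vec.fit_transform(test_docs)

print train_tdm.shape[1] == test_tdm.shape[1]  # True: identical feature dimension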
4.py (NBayes_Predict.py)

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
"""
@version: python2.7.8
@author: XiangguoSun
@contact: sunxiangguodut@qq.com
@file: NBayes_Predict.py
@time: 2017/2/8 12:21
@software: PyCharm
"""
import sys
reload(sys)
sys.setdefaultencoding('utf-8')

import cPickle as pickle
from sklearn.naive_bayes import MultinomialNB  # multinomial naive Bayes

# Read a Bunch object
def _readbunchobj(path):
    with open(path, "rb") as file_obj:
        bunch = pickle.load(file_obj)
    return bunch

# Load the training set
trainpath = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204/chinese_text_classification-master/train_word_bag/tfidfspace.dat"
train_set = _readbunchobj(trainpath)

# Load the test set
testpath = "/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204/chinese_text_classification-master/test_word_bag/testspace.dat"
test_set = _readbunchobj(testpath)

# Alternative classifier: multinomial naive Bayes on the bag-of-words vectors
# and class labels. alpha is the additive (Laplace) smoothing parameter;
# smaller values mean less smoothing.
# clf = MultinomialNB(alpha=0.1).fit(train_set.tdm, train_set.label)

######################################################
# KNN classifier
from sklearn.neighbors import KNeighborsClassifier
print '*************************\nKNN\n*************************'
clf = KNeighborsClassifier()  # default with k=5
clf.fit(train_set.tdm, train_set.label)

# Predict the classification results
predicted = clf.predict(test_set.tdm)

for flabel, file_name, expct_cate in zip(test_set.label, test_set.filenames, predicted):
    if flabel != expct_cate:
        print file_name, ": actual category:", flabel, " --> predicted category:", expct_cate

print "Prediction finished!!!"

# Compute classification metrics:
from sklearn import metrics
def metrics_result(actual, predict):
    print 'Precision: {0:.3f}'.format(metrics.precision_score(actual, predict, average='weighted'))
    print 'Recall: {0:0.3f}'.format(metrics.recall_score(actual, predict, average='weighted'))
    print 'F1-score: {0:.3f}'.format(metrics.f1_score(actual, predict, average='weighted'))

metrics_result(test_set.label, predicted)

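The script uses scikit-learn's default of k=5 neighbors. To see how sensitive the result is to k, a minimal sketch follows (assuming the train_set and test_set bunches loaded above; properly, k should be chosen by cross-validation on the training set rather than by peeking at the test set):

from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics

for k in [1, 3, 5, 10, 15]:
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(train_set.tdm, train_set.label)
    predicted = clf.predict(test_set.tdm)
    print 'k=%d: weighted F1=%.3f' % (k, metrics.f1_score(test_set.label, predicted, average='weighted'))
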

As before, this uses the Fudan University Chinese news dataset.


Run output (a partial copy):

/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204/chinese_text_classification-master/test_corpus_seg/C16-Electronics/C16-Electronics37.txt : actual category: C16-Electronics --> predicted category: C11-Space
/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204/chinese_text_classification-master/test_corpus_seg/C16-Electronics/C16-Electronics19.txt : actual category: C16-Electronics --> predicted category: C34-Economy
/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204/chinese_text_classification-master/test_corpus_seg/C16-Electronics/C16-Electronics35.txt : actual category: C16-Electronics --> predicted category: C39-Sports
/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204/chinese_text_classification-master/test_corpus_seg/C16-Electronics/C16-Electronics31.txt : actual category: C16-Electronics --> predicted category: C11-Space
/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204/chinese_text_classification-master/test_corpus_seg/C16-Electronics/C16-Electronics52.txt : actual category: C16-Electronics --> predicted category: C17-Communication
/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204/chinese_text_classification-master/test_corpus_seg/C16-Electronics/C16-Electronics07.txt : actual category: C16-Electronics --> predicted category: C17-Communication
/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204/chinese_text_classification-master/test_corpus_seg/C16-Electronics/C16-Electronics02.txt : actual category: C16-Electronics --> predicted category: C34-Economy
/home/appleyuchi/PycharmProjects/MultiNB/csdn_blog/54891204/chinese_text_classification-master/test_corpus_seg/C16-Electronics/C16-Electronics48.txt : actual category: C16-Electronics --> predicted category: C34-Economy
Prediction finished!!!
Precision: 0.890
Recall: 0.893
F1-score: 0.886


Summary

With scikit-learn's default settings (k=5), KNN reaches a weighted precision of 0.890, recall of 0.893, and F1-score of 0.886 on the Fudan news test set.

    如果覺得生活随笔網(wǎng)站內(nèi)容還不錯(cuò),歡迎將生活随笔推薦給好友。