[Kaggle Micro-course] Natural Language Processing - 2. Text Classification


Contents

    • 1. Bag of words
    • 2. Building the bag of words model
    • 3. Training the text classification model
    • 4. Prediction
    • Exercise:
      • 1. Evaluating the approach
      • 2. Data preprocessing and modeling
      • 3. Training
      • 4. Prediction
      • 5. Evaluating the model
      • 6. Improvements

learn from https://www.kaggle.com/learn/natural-language-processing

A common task in NLP is text classification. This is "classification" in the traditional machine learning sense, applied to text.

Applications include spam detection, sentiment analysis, and tagging customer queries.

In this tutorial you will learn text classification with spaCy. The classifier will detect spam messages, a common feature of most email clients.

  • Read the data

import pandas as pd

spam = pd.read_csv("./spam.csv")
spam.head(10)

1. Bag of words

Models can't learn directly from raw text, so the text has to be converted into numeric features. The simplest approach is one-hot encoding.

For example:

  • Sentence 1: "Tea is life. Tea is love."
  • Sentence 2: "Tea is healthy, calming, and delicious."

Ignoring punctuation, the vocabulary is {"tea", "is", "life", "love", "healthy", "calming", "and", "delicious"}

Counting how many times each word occurs in each sentence, we can represent each sentence as a vector:

v1 = [2, 2, 1, 1, 0, 0, 0, 0]
v2 = [1, 1, 0, 0, 1, 1, 1, 1]

This is the bag of words representation; similar documents will have similar bag of words vectors (a small sketch of the counting is shown below).
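As a minimal sketch (not part of the course code), the counting above can be reproduced with Python's collections.Counter:

from collections import Counter

vocab = ["tea", "is", "life", "love", "healthy", "calming", "and", "delicious"]

def bag_of_words(text, vocab):
    # Lowercase, strip the punctuation used above, and count word occurrences
    words = text.lower().replace(".", " ").replace(",", " ").split()
    counts = Counter(words)
    return [counts[w] for w in vocab]

print(bag_of_words("Tea is life. Tea is love.", vocab))               # [2, 2, 1, 1, 0, 0, 0, 0]
print(bag_of_words("Tea is healthy, calming, and delicious.", vocab)) # [1, 1, 0, 0, 1, 1, 1, 1]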

Another representation is TF-IDF (Term Frequency - Inverse Document Frequency), which rescales the counts so that words appearing in many documents get less weight.
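The course sticks with raw counts here, but as a hedged illustration (scikit-learn is an assumption, it isn't used in this post), TfidfVectorizer computes TF-IDF vectors for the same two sentences:

from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Tea is life. Tea is love.",
    "Tea is healthy, calming, and delicious.",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)  # sparse matrix, one row per sentence

print(vectorizer.get_feature_names_out())  # use get_feature_names() on scikit-learn < 1.0
print(tfidf.toarray())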

2. Building the bag of words model

spaCy's TextCategorizer handles the bag of words conversion and builds a simple linear model for you; it is a spaCy pipeline component.

import spacy

nlp = spacy.blank('en')  # create an empty English model

# Create the TextCategorizer with exclusive classes
# and "bow" architecture
textcat = nlp.create_pipe(
    'textcat',
    config={
        "exclusive_classes": True,  # mutually exclusive classes (binary here)
        "architecture": "bow"
    })

# Add the TextCategorizer to the empty model
nlp.add_pipe(textcat)

# help(nlp.create_pipe)
Help on method create_pipe in module spacy.language:

create_pipe(name, config={}) method of spacy.lang.en.English instance
    Create a pipeline component from a factory.

    name (unicode): Factory name to look up in `Language.factories`.
    config (dict): Configuration parameters to initialise component.
    RETURNS (callable): Pipeline component.

    DOCS: https://spacy.io/api/language#create_pipe

# Add labels to text classifier
textcat.add_label("ham")   # legitimate mail
textcat.add_label("spam")  # spam mail

3. Training the text classification model

Get the data:

train_texts = spam['text'].values
train_labels = [{'cats': {'ham': label == 'ham',
                          'spam': label == 'spam'}}
                for label in spam['label']]

Zip the texts together with their labels:

train_data = list(zip(train_texts, train_labels))
train_data[:3]

Output:

[('Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...',
  {'cats': {'ham': True, 'spam': False}}),
 ('Ok lar... Joking wif u oni...', {'cats': {'ham': True, 'spam': False}}),
 ("Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's",
  {'cats': {'ham': False, 'spam': True}})]
  • Prepare to train the model
  • Create an optimizer with nlp.begin_training(); spaCy uses it to update the model weights
  • Split the data into batches with minibatch
  • Update the model parameters with nlp.update

from spacy.util import minibatch

spacy.util.fix_random_seed(1)
optimizer = nlp.begin_training()

# Split the data into batches
batches = minibatch(train_data, size=8)
# Iterate over the batches
for batch in batches:
    texts, labels = zip(*batch)
    nlp.update(texts, labels, sgd=optimizer)

This is only a single epoch. Note how zip(*batch) splits a batch back into texts and labels:

>>> batch = [("1", True), ("2", False)]
>>> texts, labels = zip(*batch)
>>> texts
('1', '2')
>>> labels
(True, False)

https://www.runoob.com/python/python-func-zip.html

  • Iterate for multiple epochs

import random

random.seed(1)
spacy.util.fix_random_seed(1)
optimizer = nlp.begin_training()

loss = {}
for epoch in range(10):
    # Shuffle the data at each epoch
    random.shuffle(train_data)
    # Split the data into batches
    batches = minibatch(train_data, size=8)
    # Iterate over the batches
    for batch in batches:
        texts, labels = zip(*batch)
        nlp.update(texts, labels, drop=0.3, sgd=optimizer, losses=loss)
    print(loss)

# help(nlp.update)
Help on method update in module spacy.language:

update(docs, golds, drop=0.0, sgd=None, losses=None, component_cfg=None) method of spacy.lang.en.English instance
    Update the models in the pipeline.

    docs (iterable): A batch of `Doc` objects.
    golds (iterable): A batch of `GoldParse` objects.
    drop (float): The dropout rate.
    sgd (callable): An optimizer.
    losses (dict): Dictionary to update with the loss, keyed by component.
    component_cfg (dict): Config parameters for specific pipeline
        components, keyed by component name.

    DOCS: https://spacy.io/api/language#update

Output:

{'textcat': 0.22436044702671132}
{'textcat': 0.41457826484549287}
{'textcat': 0.5661000985640895}
{'textcat': 0.7119002992385974}
{'textcat': 0.8301601885299159}
{'textcat': 0.9572314705652767}
{'textcat': 1.050187804254974}
{'textcat': 1.1268915971417424}
{'textcat': 1.2132206293363608}
{'textcat': 1.3000399094508472}

4. Prediction

Before predicting, run the texts through nlp.tokenizer:

    texts = ["Are you ready for the tea party????? It's gonna be wild","URGENT Reply to this message for GUARANTEED FREE TEA"] docs = [nlp.tokenizer(text) for text in texts] textcat = nlp.get_pipe('textcat') scores, _ = textcat.predict(docs) print(scores)

Output, the predicted probabilities:

[[9.9999392e-01 6.1252954e-06]
 [4.1843491e-04 9.9958152e-01]]

Print the predicted labels:

predicted_labels = scores.argmax(axis=1)
print([textcat.labels[label] for label in predicted_labels])

['ham', 'spam']

Exercise:

In the previous exercise you did such a great job with DeFalco's restaurant that the chef hired you for a new project.

The restaurant's menu includes an email address where visitors can give feedback about their food.

The manager wants you to create a tool that automatically sends all the negative reviews to him so he can fix them, while automatically sending all the positive reviews to the owner, so the manager can ask for a raise.

You will first build a model to distinguish positive reviews from negative reviews using Yelp reviews, because those reviews include a rating with each one. Your data consists of the text body of each review along with the star rating.

Ratings of 1-2 stars count as "negative" samples, and ratings of 4-5 stars as "positive" samples. Ratings of 3 stars are "neutral" and have already been dropped from the data (a sketch of this preprocessing follows).
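As a minimal pandas sketch of that preprocessing, assuming a hypothetical raw file with text and stars columns (the course actually ships the already-prepared yelp_ratings.csv):

import pandas as pd

# Hypothetical raw file with 'text' and 'stars' columns
raw = pd.read_csv("yelp_raw.csv")

# Drop the neutral 3-star reviews, then map 1-2 stars -> 0 and 4-5 stars -> 1
raw = raw[raw.stars != 3]
raw["sentiment"] = (raw.stars >= 4).astype(int)

raw[["text", "sentiment"]].to_csv("yelp_ratings.csv", index=False)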

1. Evaluating the approach

• The advantage of the approach above is that you can distinguish positive emails from negative ones even though you don't have a history of emails labeled positive or negative.
• The disadvantage is that emails may be quite different from Yelp reviews (a different distribution), which lowers the model's accuracy. For example, customers typically use different words and slang in emails, and a model trained on Yelp reviews won't have seen those words.
• If you want to know how serious this problem is, you can compare the word frequencies of the two sources (see the sketch after this list). In practice, manually reading a few emails from each source is enough to judge whether it's a serious issue.
• If you want to do something fancier, you could create a dataset containing both Yelp reviews and emails and see whether a model can tell from the text alone which source a document came from. Ideally you'd find that the model performs poorly, because that would mean your data sources are similar.
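As a minimal sketch of such a frequency comparison (the emails and reviews lists are hypothetical samples, not course data):

from collections import Counter

def word_freq(texts):
    # Count lowercase whitespace-separated tokens across all texts
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

emails = ["The tortilla soup arrived cold :("]      # hypothetical sample
reviews = ["Best tortilla soup in town, 5 stars"]   # hypothetical sample

email_freq = word_freq(emails)
review_freq = word_freq(reviews)

# Frequent email words that never appear in the Yelp reviews
print([w for w, _ in email_freq.most_common(20) if w not in review_freq])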

2. Data preprocessing and modeling

  • Split the dataset

def load_data(csv_file, split=0.9):
    data = pd.read_csv(csv_file)

    # Shuffle data
    train_data = data.sample(frac=1, random_state=7)

    texts = train_data.text.values
    labels = [{"POSITIVE": bool(y), "NEGATIVE": not bool(y)}
              for y in train_data.sentiment.values]

    split = int(len(train_data) * split)

    train_labels = [{"cats": labels} for labels in labels[:split]]
    val_labels = [{"cats": labels} for labels in labels[split:]]

    return texts[:split], train_labels, texts[split:], val_labels

train_texts, train_labels, val_texts, val_labels = load_data('../input/nlp-course/yelp_ratings.csv')
  • Look at the training data

print('Texts from training data\n------')
print(train_texts[:2])
print('\nLabels from training data\n------')
print(train_labels[:2])

Output:

Texts from training data
------
["Some of the best sushi I've ever had....and I come from the East Coast.  Unreal toro, have some of it's available."
 "One of the best burgers I've ever had and very well priced. I got the tortilla burger and is was delicious especially with there tortilla soup!"]

Labels from training data
------
[{'cats': {'POSITIVE': True, 'NEGATIVE': False}}, {'cats': {'POSITIVE': True, 'NEGATIVE': False}}]
  • Build the model

import spacy

nlp = spacy.blank('en')  # create an empty English model

# Create the TextCategorizer with exclusive classes
# and "bow" architecture
textcat = nlp.create_pipe(
    'textcat',
    config={
        "exclusive_classes": True,  # mutually exclusive classes (binary here)
        "architecture": "bow"
    })

# Add the TextCategorizer to the empty model
nlp.add_pipe(textcat)

# Add NEGATIVE and POSITIVE labels to text classifier
textcat.add_label("NEGATIVE")  # negative reviews
textcat.add_label("POSITIVE")  # positive reviews

3. Training

from spacy.util import minibatch
import random

def train(model, train_data, optimizer, batch_size=8):
    loss = {}
    random.seed(1)
    random.shuffle(train_data)

    batches = minibatch(train_data, size=batch_size)
    for batch in batches:
        # train_data is a list of tuples [(text0, label0), (text1, label1), ...]
        # Split batch into texts and labels
        texts, labels = zip(*batch)

        # Update model with texts and labels
        model.update(texts, labels, sgd=optimizer, losses=loss)

    return loss
  • Train

# Fix seed for reproducibility
spacy.util.fix_random_seed(1)
random.seed(1)

# This may take a while to run!
optimizer = nlp.begin_training()
train_data = list(zip(train_texts, train_labels))
losses = train(nlp, train_data, optimizer)
print(losses['textcat'])
  • Try it out

text = "This tea cup was full of holes. Do not recommend."
doc = nlp(text)
print(doc.cats)

Output:

{'NEGATIVE': 0.7731374502182007, 'POSITIVE': 0.22686253488063812}

This cup of tea is no good, so the negative class gets the larger probability. To turn these scores into a single label, just take the class with the highest probability, as sketched below.
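A small sketch (not in the original course code) for reading the predicted label straight out of doc.cats:

text = "This tea cup was full of holes. Do not recommend."
doc = nlp(text)  # nlp is the trained pipeline from above
# doc.cats maps each label to a probability; pick the highest-scoring one
print(max(doc.cats, key=doc.cats.get))  # 'NEGATIVE' for this example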

4. Prediction

def predict(nlp, texts):
    # Use the model's tokenizer to tokenize each input text
    docs = [nlp.tokenizer(text) for text in texts]

    # Use textcat to get the scores for each doc
    textcat = nlp.get_pipe('textcat')
    scores, _ = textcat.predict(docs)

    # From the scores, find the class with the highest score/probability
    pred_labels = scores.argmax(axis=1)
    return pred_labels

5. Evaluating the model

def evaluate(model, texts, labels):
    """Returns the accuracy of a TextCategorizer model.

    Arguments
    ---------
    model: spaCy model with a TextCategorizer
    texts: Text samples, from load_data function
    labels: True labels, from load_data function
    """
    # Get predictions from textcat model (using your predict method)
    predicted_class = predict(model, texts)

    # From labels, get the true class as a list of integers (POSITIVE -> 1, NEGATIVE -> 0)
    true_class = [int(label['cats']['POSITIVE']) for label in labels]

    # A boolean or int array indicating correct predictions
    correct_predictions = (true_class == predicted_class)

    # The accuracy, number of correct predictions divided by all predictions
    accuracy = sum(correct_predictions) / len(true_class)
    return accuracy

accuracy = evaluate(nlp, val_texts, val_labels)
print(f"Accuracy: {accuracy:.4f}")

Output: validation accuracy of 92.39%.

Accuracy: 0.9239

  • Train for more iterations

# This may take a while to run!
n_iters = 5
for i in range(n_iters):
    losses = train(nlp, train_data, optimizer)
    accuracy = evaluate(nlp, val_texts, val_labels)
    print(f"Loss: {losses['textcat']:.3f} \t Accuracy: {accuracy:.3f}")

Loss: 6.752 	 Accuracy: 0.940
Loss: 4.105 	 Accuracy: 0.947
Loss: 2.904 	 Accuracy: 0.945
Loss: 2.267 	 Accuracy: 0.946
Loss: 1.826 	 Accuracy: 0.944

6. Improvements

There are various hyperparameters you can tune here. The most important one is the TextCategorizer's architecture.

The model above uses the simplest architecture; it trains quickly but will likely perform worse than a CNN or ensemble model (a hedged sketch of switching architectures follows).
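As a sketch, in spaCy v2 (the version this post uses) you could swap the "bow" architecture for "simple_cnn" or "ensemble" when creating the pipe, with everything else unchanged:

import spacy

nlp = spacy.blank('en')

# Same setup as before, but with a CNN architecture instead of "bow"
textcat = nlp.create_pipe(
    'textcat',
    config={
        "exclusive_classes": True,
        "architecture": "simple_cnn"  # or "ensemble"; slower to train than "bow"
    })
nlp.add_pipe(textcat)

textcat.add_label("NEGATIVE")
textcat.add_label("POSITIVE")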


My CSDN blog: https://michael.blog.csdn.net/
