
Hands-On BERT (PyTorch) -- Fine-Tuning 2

https://www.bilibili.com/video/BV1K5411t7MD?p=5
https://www.youtube.com/channel/UCoRX98PLOsaN8PtekB9kWrw/videos
BERT in Depth (PyTorch), by ChrisMcCormickAI
This is the code walkthrough for part 3, fine-tuning in PyTorch, of ChrisMcCormickAI's eight-part BERT series on YouTube. The Colab link is posted under each YouTube video; if you cannot access it, leave your email address and I will send you the complete notes once I have finished and organized them. Fine-tuning itself is still best run on Colab.


文章目錄

  • Hands-On BERT (PyTorch) -- Fine-Tuning 2
  • 4. Train Our Classification Model
    • 4.1. BertForSequenceClassification
    • 4.2. Optimizer & Learning Rate Scheduler
    • 4.3. The Training Loop
  • 5. Performance on the Test Set
    • 5.1. Data Preparation
    • 5.2. Evaluating on the Test Set
  • Summary
  • Appendix
    • A1. Saving & Loading Fine-Tuned Model
  • Revision History


4. Train Our Classification Model

4.1. BertForSequenceClassification

For this task, we first need to modify the pre-trained BERT model to produce classification outputs, and then continue training the model on our own dataset until the entire model, end to end, is well suited to our task.

Thankfully, the huggingface PyTorch implementation includes a set of interfaces designed for a variety of NLP tasks. These interfaces are all built on top of the trained BERT model, but each has a different top layer and output type to accommodate its specific NLP task.

Here is the current list of classes provided for fine-tuning:

  • BertModel
  • BertForPreTraining
  • BertForMaskedLM
  • BertForNextSentencePrediction
  • BertForSequenceClassification - The one we'll use.
  • BertForTokenClassification
  • BertForQuestionAnswering

The documentation for the transformers library is here.

We'll use BertForSequenceClassification. This is the normal BERT model with a single linear layer for classification added on top, which we will use as a sentence classifier. As we feed it input data, the entire pre-trained BERT model and the additional untrained classification layer are trained together on our task.
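
Conceptually, the classification head is nothing more than a linear layer applied to BERT's pooled output. Here is a minimal, illustrative sketch of that idea (not the library's actual implementation; the 768 hidden size and the (2, 768) classifier shape match the parameter listing printed below):

```python
import torch
import torch.nn as nn

# A rough sketch of the head that BertForSequenceClassification adds on top.
# `pooled_output` stands in for BERT's pooled [CLS] vector (hidden size 768).
hidden_size, num_labels = 768, 2
classifier = nn.Linear(hidden_size, num_labels)  # weight shape (2, 768), like `classifier.weight` below

pooled_output = torch.randn(32, hidden_size)  # a dummy batch of 32 pooled vectors
logits = classifier(pooled_output)            # shape (32, 2): one raw score per class
print(logits.shape)
```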

OK, let's load BERT! There are several different pre-trained BERT models available; we'll use "bert-base-uncased", the version that is lowercase-only ("uncased") and the smaller of the two sizes ("base" vs. "large").

The documentation for from_pretrained is here, and its other parameters are defined here.

```python
from transformers import BertForSequenceClassification, AdamW, BertConfig

# Load BertForSequenceClassification: the pretrained BERT model with a single
# linear classification layer on top.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",  # Use the 12-layer BERT model, with an uncased vocab.
    num_labels = 2,       # The number of output labels--2 for binary classification.
                          # You can increase this for multi-class tasks.
    output_attentions = False,    # Whether the model returns attentions weights.
    output_hidden_states = False, # Whether the model returns all hidden-states.
)

# Tell pytorch to run this model on the GPU.
model.cuda()
```
Out of curiosity, we can browse all of the model's parameters by name.

In the cell below, I print out the names and shapes of the weights (201 named parameters in total) for:

  • The embedding layer.
  • The first of the twelve transformers.
  • The output layer.
```python
# Get all of the model's parameters as a list of tuples.
params = list(model.named_parameters())

print('The BERT model has {:} different named parameters.\n'.format(len(params)))

print('==== Embedding Layer ====\n')
for p in params[0:5]:
    print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))

print('\n==== First Transformer ====\n')
for p in params[5:21]:
    print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))

print('\n==== Output Layer ====\n')
for p in params[-4:]:
    print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
```
```
The BERT model has 201 different named parameters.

==== Embedding Layer ====

bert.embeddings.word_embeddings.weight                  (30522, 768)
bert.embeddings.position_embeddings.weight                (512, 768)
bert.embeddings.token_type_embeddings.weight                (2, 768)
bert.embeddings.LayerNorm.weight                              (768,)
bert.embeddings.LayerNorm.bias                                (768,)

==== First Transformer ====

bert.encoder.layer.0.attention.self.query.weight          (768, 768)
bert.encoder.layer.0.attention.self.query.bias                (768,)
bert.encoder.layer.0.attention.self.key.weight            (768, 768)
bert.encoder.layer.0.attention.self.key.bias                  (768,)
bert.encoder.layer.0.attention.self.value.weight          (768, 768)
bert.encoder.layer.0.attention.self.value.bias                (768,)
bert.encoder.layer.0.attention.output.dense.weight        (768, 768)
bert.encoder.layer.0.attention.output.dense.bias              (768,)
bert.encoder.layer.0.attention.output.LayerNorm.weight        (768,)
bert.encoder.layer.0.attention.output.LayerNorm.bias          (768,)
bert.encoder.layer.0.intermediate.dense.weight           (3072, 768)
bert.encoder.layer.0.intermediate.dense.bias                 (3072,)
bert.encoder.layer.0.output.dense.weight                 (768, 3072)
bert.encoder.layer.0.output.dense.bias                        (768,)
bert.encoder.layer.0.output.LayerNorm.weight                  (768,)
bert.encoder.layer.0.output.LayerNorm.bias                    (768,)

==== Output Layer ====

bert.pooler.dense.weight                                  (768, 768)
bert.pooler.dense.bias                                        (768,)
classifier.weight                                           (2, 768)
classifier.bias                                                (2,)
```

4.2. Optimizer & Learning Rate Scheduler

Now that our model is loaded, we need to set up the training hyperparameters.

For fine-tuning, the authors recommend choosing from the following values (from the BERT paper):

  • Batch size: 16, 32
  • Learning rate (Adam): 5e-5, 3e-5, 2e-5
  • Number of epochs: 2, 3, 4

The values chosen here are:

  • Batch size: 32 (set when creating our DataLoaders)
  • Learning rate: 2e-5
  • Epochs: 4 (we'll see that this is probably too many…)

The parameter eps = 1e-8 is "a very small number to prevent any division by zero in the implementation" (from here).

You can find the creation of the AdamW optimizer in run_glue.py here.

```python
# Note: AdamW is a class from the huggingface library (as opposed to pytorch).
# I believe the 'W' stands for 'Weight Decay fix'.
optimizer = AdamW(model.parameters(),
                  lr = 2e-5,  # args.learning_rate - default is 5e-5, our notebook had 2e-5
                  eps = 1e-8  # args.adam_epsilon - default is 1e-8.
                 )
```
```python
from transformers import get_linear_schedule_with_warmup

# Number of training epochs. The BERT authors recommend between 2 and 4.
# We chose to run for 4, but we'll see later that this may be over-fitting the
# training data.
epochs = 4

# Total number of training steps is [number of batches] x [number of epochs].
# (Note that this is not the same as the number of training samples.)
total_steps = len(train_dataloader) * epochs  # 4 epochs x 241 batches in total

# Create the learning rate scheduler.
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps = 0,  # Default value in run_glue.py
                                            num_training_steps = total_steps)
```
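
With num_warmup_steps = 0, this schedule simply decays the learning rate linearly from its initial value down to zero over the course of training. As a quick standalone illustration (not from the notebook; the 241 batches per epoch figure is the one noted in the comment above):

```python
# Illustrative only: the shape of a linear decay schedule with no warmup,
# computed by hand so we don't disturb the real optimizer/scheduler above.
initial_lr = 2e-5
n_steps = 4 * 241  # epochs x batches per epoch, as above

def linear_decay_lr(step, initial_lr, n_steps):
    # The learning rate falls linearly from initial_lr at step 0 to 0 at n_steps.
    return initial_lr * max(0.0, float(n_steps - step) / n_steps)

for step in [0, 241, 482, 723, 964]:  # start of each epoch, plus the final step
    print('step {:>4}: lr = {:.2e}'.format(step, linear_decay_lr(step, initial_lr, n_steps)))
```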

4.3. The Training Loop

Below is our training loop. There's a lot going on, but fundamentally, each pass through the loop consists of a training phase and a validation phase.

Thank you to Stas Bekman for contributing the insights and code for using validation loss to detect over-fitting!

Training:

  • Unpack our data inputs and labels
  • Load the data onto the GPU
  • Clear out the gradients calculated in the previous pass
    • In PyTorch, gradients accumulate by default unless explicitly cleared (useful for things like RNNs); see the sketch after these lists
  • Forward pass (feed the input data through the network)
  • Backward pass (backpropagation)
  • Tell the network to update its parameters with optimizer.step()
  • Track variables for monitoring progress

Evaluation:

  • Unpack our inputs and labels, just as in training
  • Load the data onto the GPU
  • Forward pass (feed the input data through the network)
  • Compute the loss on our validation data, and track variables for monitoring progress
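
To see why clearing the gradients matters, here is a tiny standalone demonstration (not from the notebook) of PyTorch's default accumulation behavior:

```python
import torch

w = torch.tensor([1.0], requires_grad=True)

(2 * w).backward()   # d(2w)/dw = 2
print(w.grad)        # tensor([2.])

(3 * w).backward()   # d(3w)/dw = 3 is *added* to the existing gradient
print(w.grad)        # tensor([5.]) -- accumulated, not replaced

w.grad.zero_()       # model.zero_grad() does this for every parameter
print(w.grad)        # tensor([0.])
```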

PyTorch hides all of the detailed computation from us, but we've commented the code to point out which of the steps above is happening on each line.

First, define a helper function for calculating accuracy:

```python
import numpy as np

# Function to calculate the accuracy of our predictions vs labels.
def flat_accuracy(preds, labels):
    pred_flat = np.argmax(preds, axis=1).flatten()  # index of the highest logit
    labels_flat = labels.flatten()
    return np.sum(pred_flat == labels_flat) / len(labels_flat)
```

And a helper function for formatting elapsed time as hh:mm:ss:

```python
import time
import datetime

def format_time(elapsed):
    '''Takes a time in seconds and returns a string hh:mm:ss.'''
    # Round to the nearest second.
    elapsed_rounded = int(round((elapsed)))

    # Format as hh:mm:ss.
    return str(datetime.timedelta(seconds=elapsed_rounded))
```

Now we can start training. One spot in the author's original code needs a small fix (compare with run_glue.py): the model's return value must be unpacked the same way in the validation phase as in the training phase, which the version below does.

```python
import random
import numpy as np

# This training code is based on the `run_glue.py` script here:
# https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L128

# Set the seed value all over the place to make this reproducible.
seed_val = 42

random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)

# We'll store a number of quantities such as training and validation loss,
# validation accuracy, and timings.
training_stats = []

# Measure the total training time for the whole run.
total_t0 = time.time()

# For each epoch...
for epoch_i in range(0, epochs):

    # ========================================
    #               Training
    # ========================================

    # Perform one full pass over the training set.

    print("")
    print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
    print('Training...')

    # Measure how long the training epoch takes.
    t0 = time.time()

    # Reset the total loss for this epoch.
    total_train_loss = 0

    # Put the model into training mode. Don't be mislead--the call to
    # `train` just changes the *mode*, it doesn't *perform* the training.
    # `dropout` and `batchnorm` layers behave differently during training
    # vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch)
    model.train()

    # For each batch of training data... (241 batches in total)
    for step, batch in enumerate(train_dataloader):

        # Progress update every 40 batches.
        if step % 40 == 0 and not step == 0:
            # Calculate elapsed time in minutes.
            elapsed = format_time(time.time() - t0)

            # Report progress, e.g. "  Batch    40  of    241.    Elapsed: 0:00:08."
            print('  Batch {:>5,}  of  {:>5,}.    Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))

        # `batch` contains three pytorch tensors:
        #   [0]: input ids
        #   [1]: attention masks
        #   [2]: labels
        # Step 1: unpack the batch. Step 2: move each tensor to the GPU with `to`.
        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)

        # Step 3: always clear any previously calculated gradients before
        # performing a backward pass. PyTorch doesn't do this automatically
        # because accumulating gradients is "convenient while training RNNs".
        # (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch)
        model.zero_grad()

        # Perform a forward pass (evaluate the model on this training batch).
        # The documentation for this `model` function is here:
        # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
        # It returns different numbers of values depending on the arguments
        # given and the flags set. Here it returns the loss (because we
        # provided labels) and the "logits"--the model outputs prior to activation.
        output = model(b_input_ids,
                       token_type_ids=None,
                       attention_mask=b_input_mask,
                       labels=b_labels)

        # Accumulate the training loss over all of the batches so that we can
        # calculate the average loss at the end. `loss` is a tensor containing
        # a single value; the `.item()` function converts it to a python number.
        loss, logits = output[:2]
        total_train_loss += loss.item()

        # Perform a backward pass to calculate the gradients.
        loss.backward()

        # Clip the norm of the gradients to 1.0.
        # This helps prevent the "exploding gradients" problem.
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)

        # Update parameters and take a step using the computed gradient.
        # The optimizer dictates the "update rule"--how the parameters are
        # modified based on their gradients, the learning rate, etc.
        optimizer.step()

        # Update the learning rate.
        scheduler.step()

    # Calculate the average loss over all of the batches.
    avg_train_loss = total_train_loss / len(train_dataloader)

    # Measure how long this epoch took.
    training_time = format_time(time.time() - t0)

    print("")
    print("  Average training loss: {0:.2f}".format(avg_train_loss))
    print("  Training epoch took: {:}".format(training_time))

    # ========================================
    #               Validation
    # ========================================

    # After each training epoch, measure our performance on the validation set.

    print("")
    print("Running Validation...")

    t0 = time.time()

    # Put the model in evaluation mode--dropout and batchnorm layers behave
    # differently during evaluation.
    model.eval()

    # Tracking variables.
    total_eval_accuracy = 0
    total_eval_loss = 0
    nb_eval_steps = 0

    # Evaluate data for one epoch.
    for batch in validation_dataloader:

        # `batch` contains three pytorch tensors:
        #   [0]: input ids
        #   [1]: attention masks
        #   [2]: labels
        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)

        # Tell pytorch not to bother with constructing the compute graph during
        # the forward pass, since this is only needed for backprop (training).
        with torch.no_grad():

            # Forward pass, calculate logit predictions.
            # token_type_ids is the same as the "segment ids", which
            # differentiates sentence 1 and 2 in 2-sentence tasks.
            # The "logits" are the output values prior to applying an
            # activation function like the softmax.
            output = model(b_input_ids,
                           token_type_ids=None,
                           attention_mask=b_input_mask,
                           labels=b_labels)

        # Accumulate the validation loss.
        loss, logits = output[:2]
        total_eval_loss += loss.item()

        # Move logits and labels to CPU.
        logits = logits.detach().cpu().numpy()
        label_ids = b_labels.to('cpu').numpy()

        # Calculate the accuracy for this batch of test sentences, and
        # accumulate it over all batches.
        total_eval_accuracy += flat_accuracy(logits, label_ids)

    # Report the final accuracy for this validation run.
    avg_val_accuracy = total_eval_accuracy / len(validation_dataloader)
    print("  Accuracy: {0:.2f}".format(avg_val_accuracy))

    # Calculate the average loss over all of the batches.
    avg_val_loss = total_eval_loss / len(validation_dataloader)

    # Measure how long the validation run took.
    validation_time = format_time(time.time() - t0)

    print("  Validation Loss: {0:.2f}".format(avg_val_loss))
    print("  Validation took: {:}".format(validation_time))

    # Record all statistics from this epoch, for plotting later.
    training_stats.append(
        {
            'epoch': epoch_i + 1,
            'Training Loss': avg_train_loss,
            'Valid. Loss': avg_val_loss,
            'Valid. Accur.': avg_val_accuracy,
            'Training Time': training_time,
            'Validation Time': validation_time
        }
    )

print("")
print("Training complete!")

print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0)))
```

Let's view the summary of the training process.

```python
import pandas as pd

# Display floats with two decimal places.
pd.set_option('precision', 2)

# Create a DataFrame from our training statistics.
df_stats = pd.DataFrame(data=training_stats)

# Use the 'epoch' as the row index.
df_stats = df_stats.set_index('epoch')

# A hack to force the column headers to wrap.
#df = df.style.set_table_styles([dict(selector="th",props=[('max-width', '70px')])])

# Display the table.
df_stats
```
```
epoch   Training Loss   Valid. Loss   Valid. Accur.   Training Time   Validation Time
  1          0.50           0.45           0.80           0:00:51
  2          0.32           0.46           0.81           0:00:51
  3          0.22           0.49           0.82           0:00:51
  4          0.16           0.55           0.82           0:00:51
```

When I ran this code, my training loss did not go down but actually increased; if anyone understands this problem, please leave a comment.

Notice that, while the training loss is going down with each epoch, the validation loss is increasing! This suggests that we are training our model for too long, and it's over-fitting the training data.

(For reference, we are using 7,695 training samples and 856 validation samples.)

Validation loss is a more precise measure than accuracy, because with accuracy we don't care about the exact output value, but just which side of a threshold it falls on.

If we are predicting the correct answer, but with less confidence, then validation loss will catch this, while accuracy will not.
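
A small standalone illustration of this point (made-up logits, not from the notebook): two sets of predictions with identical accuracy can have very different losses, because cross-entropy is sensitive to confidence.

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([1, 0])

confident = torch.tensor([[-2.0, 2.0], [2.0, -2.0]])  # correct and confident
hesitant  = torch.tensor([[-0.1, 0.1], [0.1, -0.1]])  # correct, but barely

# Both get 100% accuracy -- argmax picks the right class either way...
print((confident.argmax(dim=1) == labels).float().mean().item())  # 1.0
print((hesitant.argmax(dim=1)  == labels).float().mean().item())  # 1.0

# ...but the loss tells them apart.
print(F.cross_entropy(confident, labels).item())  # ~0.02
print(F.cross_entropy(hesitant,  labels).item())  # ~0.60
```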

```python
import matplotlib.pyplot as plt
%matplotlib inline

import seaborn as sns

# Use plot styling from seaborn.
sns.set(style='darkgrid')

# Increase the plot size and font size.
sns.set(font_scale=1.5)
plt.rcParams["figure.figsize"] = (12,6)

# Plot the learning curves.
plt.plot(df_stats['Training Loss'], 'b-o', label="Training")
plt.plot(df_stats['Valid. Loss'], 'g-o', label="Validation")

# Label the plot.
plt.title("Training & Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.xticks([1, 2, 3, 4])

plt.show()
```

5. Performance on the Test Set

Now we'll load the holdout dataset and prepare inputs just as we did with the training set. Then we'll evaluate predictions using Matthew's correlation coefficient (MCC), because this is the metric used by the wider NLP community to evaluate performance on CoLA. With this metric, +1 is the best score and -1 is the worst score. This way, we can see how well we perform against the state-of-the-art models for this specific task.

5.1. Data Preparation

We'll need to apply all of the same steps that we used to prepare the training data to our test data set.

```python
import pandas as pd
from torch.utils.data import TensorDataset, DataLoader, SequentialSampler

# Load the dataset into a pandas dataframe.
df = pd.read_csv("./cola_public/raw/out_of_domain_dev.tsv", delimiter='\t', header=None,
                 names=['sentence_source', 'label', 'label_notes', 'sentence'])

# Report the number of sentences.
print('Number of test sentences: {:,}\n'.format(df.shape[0]))

# Create sentence and label lists.
sentences = df.sentence.values
labels = df.label.values

# Tokenize all of the sentences and map the tokens to their word IDs.
input_ids = []
attention_masks = []

# For every sentence...
for sent in sentences:
    # `encode_plus` will:
    #   (1) Tokenize the sentence.
    #   (2) Prepend the `[CLS]` token to the start.
    #   (3) Append the `[SEP]` token to the end.
    #   (4) Map tokens to their IDs.
    #   (5) Pad or truncate the sentence to `max_length`.
    #   (6) Create attention masks for [PAD] tokens.
    encoded_dict = tokenizer.encode_plus(
                        sent,                      # Sentence to encode.
                        add_special_tokens = True, # Add '[CLS]' and '[SEP]'
                        max_length = 64,           # Pad & truncate all sentences.
                        pad_to_max_length = True,
                        return_attention_mask = True, # Construct attn. masks.
                        return_tensors = 'pt',     # Return pytorch tensors.
                   )

    # Add the encoded sentence to the list.
    input_ids.append(encoded_dict['input_ids'])

    # And its attention mask (simply differentiates padding from non-padding).
    attention_masks.append(encoded_dict['attention_mask'])

# Convert the lists into tensors.
input_ids = torch.cat(input_ids, dim=0)
attention_masks = torch.cat(attention_masks, dim=0)
labels = torch.tensor(labels)

# Set the batch size.
batch_size = 32

# Create the DataLoader.
prediction_data = TensorDataset(input_ids, attention_masks, labels)
prediction_sampler = SequentialSampler(prediction_data)
prediction_dataloader = DataLoader(prediction_data, sampler=prediction_sampler, batch_size=batch_size)
```

Number of test sentences: 516

5.2. Evaluating on the Test Set

With the test set prepared, we can apply our fine-tuned model to generate predictions on it.

```python
# Prediction on test set

print('Predicting labels for {:,} test sentences...'.format(len(input_ids)))

# Put model in evaluation mode.
model.eval()

# Tracking variables.
predictions , true_labels = [], []

# Predict
for batch in prediction_dataloader:
    # Add batch to GPU.
    batch = tuple(t.to(device) for t in batch)

    # Unpack the inputs from our dataloader.
    b_input_ids, b_input_mask, b_labels = batch

    # Telling the model not to compute or store gradients saves memory and
    # speeds up prediction.
    with torch.no_grad():
        # Forward pass, calculate logit predictions.
        outputs = model(b_input_ids, token_type_ids=None,
                        attention_mask=b_input_mask)

    logits = outputs[0]

    # Move logits and labels to CPU.
    logits = logits.detach().cpu().numpy()
    label_ids = b_labels.to('cpu').numpy()

    # Store predictions and true labels.
    predictions.append(logits)
    true_labels.append(label_ids)

print('    DONE.')
```

Accuracy on the CoLA benchmark is measured using the "Matthews correlation coefficient" (MCC).

We use MCC here because the classes are imbalanced:

```python
print('Positive samples: %d of %d (%.2f%%)' % (df.label.sum(), len(df.label), (df.label.sum() / len(df.label) * 100.0)))
```

Positive samples: 354 of 516 (68.60%)
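
For reference, binary MCC is computed from the confusion matrix as (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)). A quick sanity check with sklearn on made-up labels (purely illustrative):

```python
from sklearn.metrics import matthews_corrcoef

# Toy labels giving TP=2, TN=1, FP=1, FN=0.
y_true = [1, 1, 0, 0]
y_pred = [1, 1, 0, 1]

# (2*1 - 1*0) / sqrt(3 * 2 * 2 * 1) = 2 / sqrt(12) ~= 0.577
print(matthews_corrcoef(y_true, y_pred))
```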

```python
from sklearn.metrics import matthews_corrcoef

matthews_set = []

# Evaluate each test batch using Matthew's correlation coefficient.
print('Calculating Matthews Corr. Coef. for each batch...')

# For each input batch...
for i in range(len(true_labels)):

    # The predictions for this batch are a 2-column ndarray (one column for "0"
    # and one column for "1"). Pick the label with the highest value and turn
    # this into a list of 0s and 1s.
    pred_labels_i = np.argmax(predictions[i], axis=1).flatten()

    # Calculate and store the coef for this batch.
    matthews = matthews_corrcoef(true_labels[i], pred_labels_i)
    matthews_set.append(matthews)
```

The final score will be based on the entire test set, but let's take a look at the scores on the individual batches to get a sense of the variability in the metric between batches.

Each batch has 32 sentences in it, except the last batch, which has only (516 % 32) = 4 test sentences in it.

Create a barplot showing the MCC score for each batch of test samples.

```python
ax = sns.barplot(x=list(range(len(matthews_set))), y=matthews_set, ci=None)

plt.title('MCC Score per Batch')
plt.ylabel('MCC Score (-1 to +1)')
plt.xlabel('Batch #')

plt.show()
```


Now we'll combine the results from all of the batches and calculate our final MCC score.

```python
# Combine the results across all batches.
flat_predictions = np.concatenate(predictions, axis=0)

# For each sample, pick the label (0 or 1) with the higher score.
flat_predictions = np.argmax(flat_predictions, axis=1).flatten()

# Combine the correct labels for each batch into a single list.
flat_true_labels = np.concatenate(true_labels, axis=0)

# Calculate the MCC.
mcc = matthews_corrcoef(flat_true_labels, flat_predictions)

print('Total MCC: %.3f' % mcc)
```

In about half an hour, and without doing any hyperparameter tuning (adjusting the learning rate, epochs, batch size, ADAM properties, etc.), we were able to get a good score.

To maximize the score, we should remove the "validation set" (which we used to help determine how many epochs to train for) and train on the entire training set.

The library documents the expected score on this benchmark as 49.23.

The official leaderboard is here.

Note that (likely because of the small dataset size) the score can vary substantially between runs.

Summary

This post demonstrated that, with a pre-trained BERT model, you can quickly and effectively create a high-quality model for whatever specific NLP task you are interested in, using the PyTorch interface with minimal effort and training time.

Appendix

    A1. Saving & Loading Fine-Tuned Model

(Taken from run_glue.py here.) Write the model and tokenizer to disk.

```python
import os

# Saving best-practices: if you use default names for the model, you can
# reload it using from_pretrained().

output_dir = './model_save/'

# Create the output directory if needed.
if not os.path.exists(output_dir):
    os.makedirs(output_dir)

print("Saving model to %s" % output_dir)

# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`.
model_to_save = model.module if hasattr(model, 'module') else model  # Take care of distributed/parallel training
model_to_save.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)

# Good practice: save your training arguments together with the trained model.
# torch.save(args, os.path.join(output_dir, 'training_args.bin'))
```
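
The section title also promises loading. The loading code isn't reproduced in this post, but a minimal sketch with the same transformers API would look like this (assuming the `output_dir` and `device` defined above):

```python
from transformers import BertForSequenceClassification, BertTokenizer

# Reload the fine-tuned model and tokenizer from disk.
model = BertForSequenceClassification.from_pretrained(output_dir)
tokenizer = BertTokenizer.from_pretrained(output_dir)

# Move the model back to the GPU if you want to keep using it there.
model.to(device)
```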

    Revision History

Version 3 - Mar 18th, 2020 - (current)

  • Simplified the tokenization and input formatting (for both training and test) by leveraging the tokenizer.encode_plus function. encode_plus handles padding and creates the attention masks for us.
  • Improved explanation of attention masks.
  • Switched to using torch.utils.data.random_split for creating the training-validation split.
  • Added a summary table of the training statistics (validation loss, time per epoch, etc.).
  • Added validation loss to the learning curve plot, so we can see if we're overfitting.
    • Thank you to Stas Bekman for contributing this!
  • Displayed the per-batch MCC as a bar plot.

Version 2 - Dec 20th, 2019 - link

  • huggingface renamed their library to transformers.
  • Updated the notebook to use the transformers library.

Version 1 - July 22nd, 2019

  • Initial version.
