
TensorFlow 2.0 + Transformers: Reproducing the Best Single Model (Bert-Last_3embedding_concat) of the Third-Place Team in the EWECT Weibo Emotion Classification Competition

  • Preface
  • Code
    • Training results
  • Summary
    • Iterative optimization


Preface

最近正在實(shí)現(xiàn)網(wǎng)易云評(píng)論情緒分類,用于評(píng)論社區(qū)研究,在搜索相關(guān)比賽的實(shí)現(xiàn)方法,看到了為數(shù)不多的單模型也能達(dá)到較好效果的情況,因此拿來(lái)復(fù)現(xiàn)作為第一版模型。
復(fù)現(xiàn)模型:微博情緒分析評(píng)測(cè)(smp2020-ewect)No.3 拿第一導(dǎo)師請(qǐng)吃肯德基 usual語(yǔ)料部分情緒分類最優(yōu)單模型。
模型結(jié)構(gòu):


Code

tensorflow : 2.0
transformers : 3.1.0
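For reference, a matching environment might be set up with pip as below; the exact patch versions are an assumption, and scikit-learn and pandas are inferred from the imports:

pip install tensorflow==2.0.0 transformers==3.1.0 scikit-learn pandas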

話不多說(shuō)直接上代碼

import os
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow.keras.backend as K
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from transformers import *


class BertNerModel(TFBertPreTrainedModel):
    dense_layer = 512      # size of the hidden dense layer
    class_num = 6          # number of emotion classes
    drop_out_rate = 0.5

    def __init__(self, config, *inputs, **kwargs):
        super(BertNerModel, self).__init__(config, *inputs, **kwargs)
        config.output_hidden_states = True  # expose all hidden layers
        self.bert_layer = TFBertMainLayer(config, name='bert')
        self.bert_layer.trainable = True
        self.liner_layer = tf.keras.layers.Dense(self.dense_layer, activation='relu')
        self.soft_max = tf.keras.layers.Dense(self.class_num, activation='softmax')
        self.drop_out = tf.keras.layers.Dropout(self.drop_out_rate)

    def call(self, inputs):
        hidden_states = self.bert_layer(inputs)
        # Concatenate the [CLS] vectors of the last three hidden layers
        # with the pooled output.
        tensor = tf.concat((hidden_states[2][-1][:, 0],
                            hidden_states[2][-2][:, 0],
                            hidden_states[2][-3][:, 0],
                            hidden_states[1]), 1)
        drop_out_l = self.drop_out(tensor)
        Dense_l = self.liner_layer(drop_out_l)
        outputs = self.soft_max(Dense_l)
        return outputs


def encode_(x, y, tokenizer):
    train_texts, val_texts, train_tags, val_tags = train_test_split(
        x, y, test_size=0.2, random_state=888)
    batch_x1 = tokenizer(train_texts, padding=True, truncation=True,
                         return_tensors="tf", max_length=60)
    batch_x2 = tokenizer(val_texts, padding=True, truncation=True,
                         return_tensors="tf", max_length=60)
    label_1 = tf.constant(train_tags)
    label_2 = tf.constant(val_tags)
    dataset_train = tf.data.Dataset.from_tensor_slices((dict(batch_x1), label_1))
    dataset_test = tf.data.Dataset.from_tensor_slices((dict(batch_x2), label_2))
    return dataset_train, dataset_test


class Metrics(tf.keras.callbacks.Callback):
    '''Callback that computes macro F1 on the validation set after each
    epoch and saves the model whenever the F1 improves. (optional)'''

    def __init__(self, valid_data):
        super(Metrics, self).__init__()
        self.validation_data = valid_data

    def on_train_begin(self, logs=None):
        self.val_f1s = []
        self.best_val_f1 = 0

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        val_predict = np.argmax(self.model(self.validation_data[0]), -1)
        val_targ = self.validation_data[1]
        _val_f1 = f1_score(val_targ, val_predict, average='macro')
        self.val_f1s.append(_val_f1)
        logs['val_f1'] = _val_f1
        if _val_f1 > self.best_val_f1:
            self.model.save_pretrained('./checkpoints/weights-f1={}/'.format(_val_f1),
                                       overwrite=True)
            self.best_val_f1 = _val_f1
            print("best f1: {}".format(self.best_val_f1))
        else:
            print("val f1: {}, but not the best f1".format(_val_f1))


def focal_loss(label, pred, class_num=6, gamma=2):
    '''Multi-class focal loss; no accuracy gain observed so far. (optional)'''
    label = tf.squeeze(tf.cast(tf.one_hot(tf.cast(label, tf.int32), class_num), pred.dtype))
    pred = tf.clip_by_value(pred, 1e-8, 1.0)
    w1 = tf.math.pow((1.0 - pred), gamma)
    L = -tf.math.reduce_sum(w1 * label * tf.math.log(pred))
    return L


def sparse_categorical_crossentropy(y_true, y_pred):
    y_true = tf.reshape(y_true, tf.shape(y_pred)[:-1])
    y_true = tf.cast(y_true, tf.int32)
    y_true = tf.one_hot(y_true, K.shape(y_pred)[-1])
    return tf.keras.losses.categorical_crossentropy(y_true, y_pred)


def loss_with_gradient_penalty(model, epsilon=1):
    '''Gradient-penalty loss in the spirit of FGM adversarial training;
    worth roughly 1 point. (optional)'''
    def loss_with_gradient_penalty_2(y_true, y_pred):
        loss = tf.math.reduce_mean(sparse_categorical_crossentropy(y_true, y_pred))
        embeddings = model.variables[0]  # the word-embedding matrix
        gp = tf.math.reduce_sum(tf.gradients(loss, [embeddings])[0].values ** 2)
        return loss + 0.5 * epsilon * gp
    return loss_with_gradient_penalty_2


def main():
    if not os.path.exists('./checkpoints'):
        os.makedirs('./checkpoints')
    tb_callback = tf.keras.callbacks.TensorBoard(log_dir='./logs', profile_batch=0)

    # Load the pretrained model.
    pretrained_path = 'model/'
    config_path = os.path.join(pretrained_path, 'bert_config.json')
    vocab_path = os.path.join(pretrained_path, 'vocab.txt')
    config = BertConfig.from_json_file(config_path)
    tokenizer = BertTokenizer.from_pretrained(vocab_path)
    bert_ner_model = BertNerModel.from_pretrained(pretrained_path, config=config, from_pt=True)

    # Prepare the data.
    data = pd.read_csv('data_proceed.csv')
    data = data.dropna()
    emotion_g2id = {}
    for i, j in enumerate(set(data['情绪标签'])):
        emotion_g2id[j] = i
    data['情绪标签'] = data['情绪标签'].apply(lambda x: emotion_g2id[x])
    data_size = len(data['情绪标签'])
    train_size = data_size * 0.8
    test_size = data_size * 0.2
    steps_per_epoch = train_size // 16
    validation_step = test_size // 16
    dataset_train, dataset_test = encode_(list(data['文本']), list(data['情绪标签']), tokenizer)
    dataset_train = dataset_train.shuffle(999).repeat().batch(16)
    dataset_test = dataset_test.batch(16)

    # Train.
    optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5)
    bert_ner_model.compile(optimizer=optimizer,
                           loss=[loss_with_gradient_penalty(bert_ner_model, 0.5)],
                           metrics=['sparse_categorical_accuracy'])
    # tf's built-in sparse_categorical_crossentropy can be used as the loss instead.
    bert_ner_model.fit(dataset_train, epochs=5, verbose=1,
                       steps_per_epoch=steps_per_epoch,
                       validation_data=dataset_test,
                       validation_steps=validation_step)
    # callbacks=[Metrics(valid_data=(batch_x2, label_2)), tb_callback])
    bert_ner_model.save_pretrained('./my_mrpc_model/')


if __name__ == '__main__':
    main()
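After training, the saved weights can be reloaded for prediction. The following is a minimal inference sketch, under the assumptions that the model was saved to ./my_mrpc_model/ as above, that BertNerModel is importable from the training script, and that the example sentence is purely illustrative:

import tensorflow as tf
from transformers import BertConfig, BertTokenizer
# assumes: from the training script above, BertNerModel is importable

config = BertConfig.from_json_file('./my_mrpc_model/config.json')
tokenizer = BertTokenizer.from_pretrained('model/vocab.txt')
model = BertNerModel.from_pretrained('./my_mrpc_model/', config=config)

texts = ['今天心情特别好!']  # any list of raw comment strings
batch = tokenizer(texts, padding=True, truncation=True, return_tensors='tf', max_length=60)
probs = model(dict(batch))                       # (batch, 6) softmax probabilities
pred_ids = tf.argmax(probs, axis=-1).numpy()     # label ids; map back via emotion_g2id
confidences = tf.reduce_max(probs, axis=-1).numpy()
print(pred_ids, confidences)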

訓(xùn)練結(jié)果


Summary

結(jié)果與比賽的結(jié)果還有一些差距,因?yàn)樵跀?shù)據(jù)預(yù)處理和模型調(diào)優(yōu)上沒(méi)有做太多嘗試,這里只在3個(gè)hidden_layers后接了一個(gè)dense直接跟了softmax,可以考慮自己加一些lstm等嘗試一下。


迭代優(yōu)化

之后會(huì)用這個(gè)模型去對(duì)網(wǎng)易云音樂(lè)的評(píng)論進(jìn)行推理,選取置信度較高的樣本加入到原訓(xùn)練樣本中一起學(xué)習(xí),有點(diǎn)自學(xué)習(xí)那味了。
