
2022 Mango TV Algorithm Competition: Predicting the User's Next Watched Video (baseline: CF recall + YoutubeDNN)

Published: 2023/12/18 · 豆豆

Next-Watched-Video Prediction: baseline CF

The 3rd "Malanshan Cup" International Audio & Video Algorithm Competition

1. Competition Background

Improving the viewing experience is one of Mango TV's core technical challenges, and one the platform works on constantly. Detecting users' interests in time and adjusting what content is shown are essential to that goal. Predicting the next item in a sequence, given a user's viewing history and behavioral context, is an important and difficult recommendation task. This competition takes that setting as its background and asks participants to build the best sequence-prediction model they can on a real-world dataset.

2. Task

Using the data provided by the organizers (user behavior on Mango TV, video features, tag features, and so on), build a model that predicts the video a user will watch at the next moment on Mango TV.
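To make the data shape concrete: the baseline below assumes an interaction log with one row per play event, where `did` is the device/user id and `vid` the video id (those column names come from the code; the toy values here are made up). Grouping by `did` yields the per-user watch sequences whose next element is the prediction target.

```python
import pandas as pd

# Hypothetical toy interaction log in the format the baseline assumes:
# one row per play event, ordered by time within each user.
toy_seq = pd.DataFrame({
    'did': ['u1', 'u1', 'u1', 'u2', 'u2'],
    'vid': ['v10', 'v11', 'v12', 'v10', 'v12'],
})

# Per-user watch sequences; the task is to predict the vid that follows each one.
toy_sessions = toy_seq.groupby('did')['vid'].agg(list).to_dict()
print(toy_sessions)  # {'u1': ['v10', 'v11', 'v12'], 'u2': ['v10', 'v12']}
```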

3. baseline_CF

```python
import os
import math
import pickle
from collections import defaultdict

import numpy as np
import pandas as pd
from tqdm import tqdm

data_dir = '/Desktop/比賽/2022用戶下一個觀看視頻預測'
seq = pd.read_csv(os.path.join(data_dir, 'dataset/main_vv_seq_train.csv'))
candidates_items = pd.read_csv(os.path.join(data_dir, 'dataset/candidate_items_A.csv'))
candidates_set = set(candidates_items.vid)

def get_sim_item(df, user_col, item_col, use_iif=False):
    user_item_ = df.groupby(user_col)[item_col].agg(set).reset_index()
    user_item_dict = dict(zip(user_item_[user_col], user_item_[item_col]))
    item_user_ = df.groupby(item_col)[user_col].agg(set).reset_index()
    item_user_dict = dict(zip(item_user_[item_col], item_user_[user_col]))
    sim_item_corr = {}
    for item, users in tqdm(item_user_dict.items()):
        sim_item_corr.setdefault(item, {})
        for u in users:
            tmp_len = len(user_item_dict[u])
            for relate_item in user_item_dict[u]:
                sim_item_corr[item].setdefault(relate_item, 0)
                # IIF-style weighting: damp popular items and long user histories
                # (the use_iif flag is accepted but the weight is always applied here)
                sim_item_corr[item][relate_item] += 1 / (math.log(len(users) + 1) * math.log(tmp_len + 1))
    return sim_item_corr

# note: the original snippet called this with an undefined `train_sessions`;
# `seq` is the training interaction log loaded above
sim_item_corr = get_sim_item(seq, 'did', 'vid', use_iif=True)

def recommend(sim_item_corr, popular_items, top_k, session_item_list, item_num=100):
    rank = {}
    for i in session_item_list:
        if i not in sim_item_corr:
            continue
        for j, wij in sorted(sim_item_corr[i].items(), key=lambda d: d[1], reverse=True)[:item_num]:
            if j not in candidates_set:
                continue
            if j not in session_item_list:
                rank.setdefault(j, 0)
                rank[j] += wij
    if len(rank) > 0:
        rank = sorted(rank.items(), key=lambda d: d[1], reverse=True)[:top_k]
        rank = np.array(rank)
        item_list = list(rank[:, 0])
        score_list = rank[:, 1]
    else:
        # a popularity fallback via `popular_items` could go here; the baseline leaves it empty
        item_list = []
        score_list = []
    return item_list, score_list

top_k = 6
# `popular_items` was undefined in the original snippet; a global popularity
# ranking is a reasonable stand-in (the function currently ignores it)
popular_items = seq['vid'].value_counts().index.tolist()
train_session_dict = seq.groupby('did')['vid'].agg(list).to_dict()
session_id_list, item_id_list, rank_list = [], [], []
for session_id, session_item_list in tqdm(train_session_dict.items()):
    item_list, score_list = recommend(sim_item_corr, popular_items, top_k, session_item_list)
    session_id_list += [session_id] * len(item_list)
    item_id_list += list(item_list)
    rank_list += list(range(1, len(item_list) + 1))

res_df = pd.DataFrame()
res_df['did'] = session_id_list
res_df['vid'] = item_id_list
res_df['rank'] = rank_list
res_df.to_csv(data_dir + '/results/baseline_CF_0627.csv', index=False)
```
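To see what the co-occurrence weighting in `get_sim_item` is doing, here is a tiny self-contained sketch of the same logic on a made-up log (the ids are hypothetical). Items watched together by more users end up with a higher similarity weight, damped by item popularity and user history length:

```python
import math
from collections import defaultdict

# Toy log: user -> set of watched videos (made-up ids)
user_items = {'u1': {'a', 'b'}, 'u2': {'a', 'b', 'c'}}

# Invert to item -> set of users who watched it
item_users = defaultdict(set)
for u, items in user_items.items():
    for i in items:
        item_users[i].add(u)

sim = defaultdict(lambda: defaultdict(float))
for item, users in item_users.items():
    for u in users:
        for other in user_items[u]:
            # same IIF-style weight as the baseline: damp popular items
            # and users with long histories
            sim[item][other] += 1 / (math.log(len(users) + 1) * math.log(len(user_items[u]) + 1))

# 'a' and 'b' co-occur for both users, 'a' and 'c' only for u2,
# so the a-b weight comes out higher than the a-c weight
print(sim['a']['b'] > sim['a']['c'])  # True
```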

Online score: 0.1672. Compared with the RecSys 2022 dataset, this competition's dataset leaves more room to maneuver.

4. YoutubeDNN Recall

4.1 Data Processing

```python
import random
import collections
import pickle
from collections import Counter

import numpy as np
import faiss
import tensorflow as tf
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tqdm import tqdm
from deepctr.feature_column import SparseFeat, VarLenSparseFeat
from deepmatch.models import YoutubeDNN
from deepmatch.utils import sampledsoftmaxloss, NegativeSampler


def gen_data_set(data, negsample=5):
    item_ids = data['vid'].unique()
    train_set = []
    test_set = []
    for reviewerID, hist in tqdm(data.groupby('did')):
        pos_list = hist['vid'].tolist()
        if negsample > 0:
            candidate_set = list(set(item_ids) - set(pos_list))
            # for each positive sample, draw `negsample` negatives
            neg_list = np.random.choice(candidate_set, size=len(pos_list) * negsample, replace=True)
        if len(pos_list) == 1:
            train_set.append((reviewerID, [pos_list[0]], pos_list[0], 1, len(pos_list)))
            test_set.append((reviewerID, [pos_list[0]], pos_list[0], 1, len(pos_list)))
        # sliding window over the sequence to build positive/negative samples
        for i in range(1, len(pos_list)):
            hist_seq = pos_list[:i]
            if i != len(pos_list) - 1:
                # positive sample: [user_id, his_item, pos_item, label, len(his_item)]
                train_set.append((reviewerID, hist_seq[::-1], pos_list[i], 1, len(hist_seq)))
                for negi in range(negsample):
                    # negative sample: [user_id, his_item, neg_item, label, len(his_item)]
                    train_set.append((reviewerID, hist_seq[::-1], neg_list[i * negsample + negi], 0, len(hist_seq)))
            else:
                # the longest prefix becomes that user's test sample
                test_set.append((reviewerID, hist_seq[::-1], pos_list[i], 1, len(hist_seq)))
    random.shuffle(train_set)
    random.shuffle(test_set)
    return train_set, test_set


# pad the inputs so every sequence feature has the same length
def gen_model_input(train_set, user_profile, seq_max_len):
    train_uid = np.array([line[0] for line in train_set])
    train_seq = [line[1] for line in train_set]
    train_iid = np.array([line[2] for line in train_set])
    train_label = np.array([line[3] for line in train_set])
    train_hist_len = np.array([line[4] for line in train_set])
    train_seq_pad = pad_sequences(train_seq, maxlen=seq_max_len,
                                  padding='post', truncating='post', value=0)
    train_model_input = {"did": train_uid, "vid": train_iid,
                         "hist_article_id": train_seq_pad, "hist_len": train_hist_len}
    return train_model_input, train_label


def youtubednn_u2i_dict(data, topk=7):
    SEQ_LEN = 30  # length of the user's watch sequence: short ones padded, long ones truncated
    user_profile_ = data[["did"]].drop_duplicates('did')
    item_profile_ = data[["vid"]].drop_duplicates('vid')

    # label-encode the categorical features
    features = ["vid", "did"]
    feature_max_idx = {}
    for feature in features:
        lbe = LabelEncoder()
        data[feature] = lbe.fit_transform(data[feature])
        feature_max_idx[feature] = data[feature].max() + 1

    # did/vid profiles; which extra features to include needs further analysis
    user_profile = data[["did"]].drop_duplicates('did')
    item_profile = data[["vid"]].drop_duplicates('vid')
    user_index_2_rawid = dict(zip(user_profile['did'], user_profile_['did']))
    item_index_2_rawid = dict(zip(item_profile['vid'], item_profile_['vid']))

    # train/test split; deep models need lots of data, so the training
    # samples are expanded with a sliding window to keep recall quality up
    train_set, test_set = gen_data_set(data, 0)
    train_model_input, train_label = gen_model_input(train_set, user_profile, SEQ_LEN)
    test_model_input, test_label = gen_model_input(test_set, user_profile, SEQ_LEN)

    embedding_dim = 32

    # feature columns in the form the model consumes directly
    user_feature_columns = [
        SparseFeat('did', feature_max_idx['did'], embedding_dim),
        VarLenSparseFeat(SparseFeat('hist_article_id', feature_max_idx['vid'], embedding_dim,
                                    embedding_name="vid"), SEQ_LEN, 'mean', 'hist_len'),
    ]
    item_feature_columns = [SparseFeat('vid', feature_max_idx['vid'], embedding_dim)]

    train_counter = Counter(train_model_input['vid'])
    item_count = [train_counter.get(i, 0) for i in range(item_feature_columns[0].vocabulary_size)]
    sampler_config = NegativeSampler('frequency', num_sampled=5, item_name='vid', item_count=item_count)

    if tf.__version__ >= '2.0.0':
        tf.compat.v1.disable_eager_execution()

    # define and train the model
    print("starting!")
    model = YoutubeDNN(user_feature_columns, item_feature_columns,
                       user_dnn_hidden_units=(256, 64, embedding_dim),
                       sampler_config=sampler_config)
    model.compile(optimizer="adam", loss=sampledsoftmaxloss)
    history = model.fit(train_model_input, train_label, batch_size=256, epochs=10,
                        verbose=1, validation_split=0.0)

    test_user_model_input = test_model_input
    all_item_model_input = {"vid": item_profile['vid'].values}
    user_embedding_model = Model(inputs=model.user_input, outputs=model.user_embedding)
    item_embedding_model = Model(inputs=model.item_input, outputs=model.item_embedding)
    user_embs = user_embedding_model.predict(test_user_model_input, batch_size=2 ** 12)
    item_embs = item_embedding_model.predict(all_item_model_input, batch_size=2 ** 12)

    # L2-normalize the embeddings before saving
    user_embs = user_embs / np.linalg.norm(user_embs, axis=1, keepdims=True)
    item_embs = item_embs / np.linalg.norm(item_embs, axis=1, keepdims=True)

    # store the embeddings as dicts keyed by raw id for easy lookup
    raw_user_id_emb_dict = {user_index_2_rawid[k]: v
                            for k, v in zip(user_profile['did'], user_embs)}
    raw_item_id_emb_dict = {item_index_2_rawid[k]: v
                            for k, v in zip(item_profile['vid'], item_embs)}
    pickle.dump(raw_user_id_emb_dict, open('./temp_data/user_youtube_emb.pkl', 'wb'))
    pickle.dump(raw_item_id_emb_dict, open('./temp_data/item_youtube_emb.pkl', 'wb'))

    # faiss nearest-neighbour search: for each user embedding, find the topk most similar vids
    index = faiss.IndexFlatIP(embedding_dim)
    index.add(item_embs)  # build the index over the vid vectors
    sim, idx = index.search(np.ascontiguousarray(user_embs), topk)

    user_recall_items_dict = collections.defaultdict(dict)
    # note: the input dict is keyed 'did'; the original snippet used a
    # nonexistent 'user_id' key here
    for target_idx, sim_value_list, rele_idx_list in tqdm(zip(test_user_model_input['did'], sim, idx)):
        target_raw_id = user_index_2_rawid[target_idx]
        # starting from 1 drops the first hit (a leftover from the i2i version of
        # this code; in u2i recall the first hit is a real candidate), so only
        # topk-1 items are kept per user
        for rele_idx, sim_value in zip(rele_idx_list[1:], sim_value_list[1:]):
            rele_raw_id = item_index_2_rawid[rele_idx]
            user_recall_items_dict[target_raw_id][rele_raw_id] = \
                user_recall_items_dict[target_raw_id].get(rele_raw_id, 0) + sim_value
    user_recall_items_dict = {k: sorted(v.items(), key=lambda x: x[1], reverse=True)
                              for k, v in user_recall_items_dict.items()}
    return user_recall_items_dict


# `data` is the did/vid interaction log (e.g. the seq DataFrame loaded earlier)
user_recall_items_dict = youtubednn_u2i_dict(data, topk=20)
```
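The code above L2-normalizes the embeddings before adding them to `IndexFlatIP`. The reason is that an inner product over unit vectors equals cosine similarity, so the faiss top-k is a cosine top-k. A quick numpy check with random vectors (no faiss needed):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=(4,))          # one user embedding
items = rng.normal(size=(5, 4))    # five item embeddings

# cosine similarity of each item to the user
cos = items @ u / (np.linalg.norm(items, axis=1) * np.linalg.norm(u))

# inner product after L2 normalization (what IndexFlatIP computes)
u_n = u / np.linalg.norm(u)
items_n = items / np.linalg.norm(items, axis=1, keepdims=True)
ip = items_n @ u_n

# identical up to floating point, so the rankings agree
print(np.allclose(cos, ip))  # True
```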

Afterwards, all that remains is to read the dictionary back and convert it into a DataFrame. But when I submitted the YoutubeDNN-only recall result, it somehow scored 0! Can anyone explain why?
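A minimal sketch of that conversion, assuming `user_recall_items_dict` maps raw `did` to a score-sorted list of `(vid, score)` pairs, as the function above returns (the ids here are hypothetical):

```python
import pandas as pd

# Hypothetical recall output in the shape returned by youtubednn_u2i_dict
user_recall_items_dict = {
    'u1': [('v3', 0.9), ('v7', 0.5)],
    'u2': [('v1', 0.8)],
}

rows = []
for did, recalls in user_recall_items_dict.items():
    for rank, (vid, score) in enumerate(recalls, start=1):
        rows.append({'did': did, 'vid': vid, 'rank': rank})

res_df = pd.DataFrame(rows)
print(res_df)
# res_df.to_csv('results/youtubednn_recall.csv', index=False)
```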

Summary

The post covered two recall approaches for the 2022 Mango TV next-watched-video prediction competition: an itemCF baseline (online score 0.1672) and a YoutubeDNN u2i recall. Hope it helps you with similar problems.