A PyTorch-Based Movie Recommendation System
This article introduces a movie recommendation system built with PyTorch.
The code is ported from https://github.com/chengstone/movie_recommender.
The original author implemented this MovieLens-based recommender in TensorFlow 1.0; here I port it to PyTorch 0.4.
GitHub repository for the model described in this article: https://github.com/Holy-Shine/movie_recommend_system
1. Overall Structure
First, let's look at how the project directory is organized:
Where:
Params: stores the model's parameter file and the user/movie feature vectors obtained after training
data.p: stores the training and test data
dataset.py: subclasses PyTorch's Dataset class and acts as a generator of data batches
model.py: the PyTorch implementation of the recommendation model
main.py: the main training procedure
recInterface.py: after training, the model's intermediate outputs serve as movie and user feature vectors; this interface uses the spatial relationships among those vectors to produce targeted recommendations
test.py: not part of the pipeline; used only to check that input dimensions match the model
2. Dataset Interface: dataset.py
dataset.py loads data.p into memory and, generator-style, yields batches of the given batch_size that are fed into the model for training. Let's first see what data.p looks like.
data.p is a pickle file holding the input data; once loaded, it is a pandas (>=0.22.0) DataFrame (as shown below).
The dataset can be loaded and inspected with the following code (a Jupyter Notebook is recommended):
import pickle as pkl
data = pkl.load(open('data.p','rb'))
data
Now let's see how the data-loading class is implemented:
import pickle as pkl
import torch
from torch.utils.data import Dataset

class MovieRankDataset(Dataset):

    def __init__(self, pkl_file):
        self.dataFrame = pkl.load(open(pkl_file, 'rb'))

    def __len__(self):
        return len(self.dataFrame)

    def __getitem__(self, idx):
        # user data
        uid = self.dataFrame.ix[idx]['user_id']
        gender = self.dataFrame.ix[idx]['user_gender']
        age = self.dataFrame.ix[idx]['user_age']
        job = self.dataFrame.ix[idx]['user_job']

        # movie data
        mid = self.dataFrame.ix[idx]['movie_id']
        mtype = self.dataFrame.ix[idx]['movie_type']
        mtext = self.dataFrame.ix[idx]['movie_title']

        # target
        rank = torch.FloatTensor([self.dataFrame.ix[idx]['rank']])

        user_inputs = {
            'uid': torch.LongTensor([uid]).view(1, -1),
            'gender': torch.LongTensor([gender]).view(1, -1),
            'age': torch.LongTensor([age]).view(1, -1),
            'job': torch.LongTensor([job]).view(1, -1)
        }

        movie_inputs = {
            'mid': torch.LongTensor([mid]).view(1, -1),
            'mtype': torch.LongTensor(mtype),
            'mtext': torch.LongTensor(mtext)
        }

        sample = {
            'user_inputs': user_inputs,
            'movie_inputs': movie_inputs,
            'target': rank
        }
        return sample
PyTorch requires a custom Dataset subclass to implement three methods:
__init__() performs initialization
__len__() returns the number of samples in the dataset
__getitem__(idx) returns the sample at index idx
Look closely at __getitem__(idx): it mainly uses the DataFrame accessor dataFrame.ix[idx]['user_id'] to fetch each attribute (note that .ix is deprecated and was removed in pandas 1.0; .loc does the same job in newer versions). Since the model takes a two-channel input (user + movie), the extracted attributes are packed into two dicts, which are then combined into a single sample dict and returned; unpacking happens at training time. (During packing, torch.LongTensor()/torch.FloatTensor() convert the values into tensors PyTorch can consume.)
3. The Recommendation Model: model.py
First, a diagram of the model we want to implement:
(Note: image from the original author's repository)
PyTorch likewise requires a custom model class to implement at least two methods: __init__() and forward(). __init__() initializes the layers (linear, convolutional, embedding, and so on), while forward() defines the forward pass; the error gradients are then propagated backward automatically by autograd.
Let's look at these two functions in model.py:
3.1 The initialization function
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')


class rec_model(nn.Module):

    def __init__(self, user_max_dict, movie_max_dict, convParams, embed_dim=32, fc_size=200):
        '''
        Args:
            user_max_dict: the max value of each user attribute, e.g. {'uid': xx, 'gender': xx, 'age': xx, 'job': xx}
            movie_max_dict: the max value of each movie attribute, e.g. {'mid': xx, 'mtype': 18, 'mword': 5215}
            convParams: text-CNN hyperparameters, e.g. {'kernel_sizes': [2, 3, 4, 5]}
            embed_dim: global embedding size
            fc_size: size of the fully connected layers
        '''
        super(rec_model, self).__init__()

        # --------------------------------- user channel ---------------------------------
        # user embeddings
        self.embedding_uid = nn.Embedding(user_max_dict['uid'], embed_dim)
        self.embedding_gender = nn.Embedding(user_max_dict['gender'], embed_dim // 2)
        self.embedding_age = nn.Embedding(user_max_dict['age'], embed_dim // 2)
        self.embedding_job = nn.Embedding(user_max_dict['job'], embed_dim // 2)

        # user embedding to fc: the first dense layer
        self.fc_uid = nn.Linear(embed_dim, embed_dim)
        self.fc_gender = nn.Linear(embed_dim // 2, embed_dim)
        self.fc_age = nn.Linear(embed_dim // 2, embed_dim)
        self.fc_job = nn.Linear(embed_dim // 2, embed_dim)

        # concatenated embeddings to fc: the second dense layer
        self.fc_user_combine = nn.Linear(4 * embed_dim, fc_size)

        # --------------------------------- movie channel ---------------------------------
        # movie embeddings
        self.embedding_mid = nn.Embedding(movie_max_dict['mid'], embed_dim)  # normally 32
        self.embedding_mtype_sum = nn.EmbeddingBag(movie_max_dict['mtype'], embed_dim, mode='sum')

        self.fc_mid = nn.Linear(embed_dim, embed_dim)
        self.fc_mtype = nn.Linear(embed_dim, embed_dim)

        # movie embeddings to fc
        self.fc_mid_mtype = nn.Linear(embed_dim * 2, fc_size)

        # text convolutional part
        # word list to embedding matrix: B x L x D, L=15 words
        self.embedding_mwords = nn.Embedding(movie_max_dict['mword'], embed_dim)

        # input word-vector matrix is B x 15 x 32
        # load text-CNN params
        kernel_sizes = convParams['kernel_sizes']
        # 8 kernels per branch, stride=1, padding=0, kernel sizes [2x32, 3x32, 4x32, 5x32];
        # use nn.ModuleList (not a plain Python list) so the branches are registered and
        # picked up by model.parameters() and model.to(device)
        self.Convs_text = nn.ModuleList([nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(k, embed_dim)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(15 - k + 1, 1), stride=(1, 1))
        ) for k in kernel_sizes])

        # movie channel concat
        self.fc_movie_combine = nn.Linear(embed_dim * 2 + 8 * len(kernel_sizes), fc_size)  # tanh

        # BatchNorm layer
        self.BN = nn.BatchNorm2d(1)
__init__() takes five arguments:
user_max_dict / movie_max_dict: user/movie dictionaries holding the maximum value of each user/movie attribute; they determine the size of the model's embedding tables.
user_max_dict={
'uid':6041, # 6040 users
'gender':2,
'age':7,
'job':21
}
movie_max_dict={
'mid':3953, # 3952 movies
'mtype':18,
'mword':5215 # 5215 words
}
In our model these dictionaries are passed in as fixed arguments.
convParams: hyperparameters of the text-convolution network, specifying the number of branches and the kernel sizes.
convParams={
'kernel_sizes':[2,3,4,5]
}
embed_dim: the global embedding size, i.e. the dimensionality of the feature space.
fc_size: the number of neurons in the final fully connected layer.
Finally, the fully connected layers, embedding layers, and text-convolution layers are defined per channel (the title text has already been digitized into word indices and stored in the dataset).
3.2 The forward pass
Let's go straight to the code:
    def forward(self, user_input, movie_input):
        # unpack the batch
        uid = user_input['uid']
        gender = user_input['gender']
        age = user_input['age']
        job = user_input['job']

        mid = movie_input['mid']
        mtype = movie_input['mtype']
        mtext = movie_input['mtext']
        if torch.cuda.is_available():
            uid, gender, age, job, mid, mtype, mtext = \
                uid.to(device), gender.to(device), age.to(device), job.to(device), \
                mid.to(device), mtype.to(device), mtext.to(device)

        # user channel
        feature_uid = self.BN(F.relu(self.fc_uid(self.embedding_uid(uid))))
        feature_gender = self.BN(F.relu(self.fc_gender(self.embedding_gender(gender))))
        feature_age = self.BN(F.relu(self.fc_age(self.embedding_age(age))))
        feature_job = self.BN(F.relu(self.fc_job(self.embedding_job(job))))

        # feature_user: B x 1 x 200
        feature_user = F.tanh(self.fc_user_combine(
            torch.cat([feature_uid, feature_gender, feature_age, feature_job], 3)
        )).view(-1, 1, 200)

        # movie channel
        feature_mid = self.BN(F.relu(self.fc_mid(self.embedding_mid(mid))))
        feature_mtype = self.BN(F.relu(self.fc_mtype(self.embedding_mtype_sum(mtype)).view(-1, 1, 1, 32)))

        # text CNN part
        feature_img = self.embedding_mwords(mtext)  # word matrix: B x 15 x 32
        flattern_tensors = []
        for conv in self.Convs_text:
            # each branch outputs B x 8 x 1 x 1; reshape to B x 1 x 8
            flattern_tensors.append(conv(feature_img.view(-1, 1, 15, 32)).view(-1, 1, 8))

        feature_flattern_dropout = F.dropout(torch.cat(flattern_tensors, 2), p=0.5)  # B x 1 x 32

        # feature_movie: B x 1 x 200
        feature_movie = F.tanh(self.fc_movie_combine(
            torch.cat([feature_mid.view(-1, 1, 32), feature_mtype.view(-1, 1, 32), feature_flattern_dropout], 2)
        ))

        # predicted rating: dot product of the two feature vectors, B x 1
        output = torch.sum(feature_user * feature_movie, 2)
        return output, feature_user, feature_movie
It breaks down into two steps:
Unpack the data: split the sample into its fields using the keys of the user and movie dicts.
Forward pass: nothing special; the tensors are simply pushed through the layers defined in __init__().
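The final line of forward() deserves a note: the predicted rating is the dot product of the user and movie feature vectors. A small sketch with random stand-in tensors (shapes taken from the comments in the code) confirms that torch.sum(u * m, 2) is the same as a batched matrix product:

```python
import torch

torch.manual_seed(0)
B = 4
# feature_user and feature_movie are both B x 1 x 200 after the two channels
feature_user = torch.randn(B, 1, 200)
feature_movie = torch.randn(B, 1, 200)

# the model's output line: elementwise product summed over the feature dim
output = torch.sum(feature_user * feature_movie, 2)  # B x 1 predicted ratings
print(output.shape)  # torch.Size([4, 1])

# equivalent batched dot product via bmm
alt = torch.bmm(feature_user, feature_movie.transpose(1, 2)).view(B, 1)
print(torch.allclose(output, alt, atol=1e-3))  # True
```

Because the rating is a plain dot product, the two feature vectors can later be saved separately and compared offline, which is exactly what recInterface.py exploits.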
4. The Main Program: main.py
Again, the code first:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from tensorboardX import SummaryWriter

from dataset import MovieRankDataset
from model import rec_model

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')


def train(model, num_epochs=5, lr=0.0001):
    loss_function = nn.MSELoss()
    optimizer = optim.Adam(model.parameters(), lr=lr)

    datasets = MovieRankDataset(pkl_file='data.p')
    dataloader = DataLoader(datasets, batch_size=256, shuffle=True)

    writer = SummaryWriter()
    for epoch in range(num_epochs):
        loss_all = 0
        for i_batch, sample_batch in enumerate(dataloader):

            user_inputs = sample_batch['user_inputs']
            movie_inputs = sample_batch['movie_inputs']
            target = sample_batch['target'].to(device)

            model.zero_grad()

            tag_rank, _, _ = model(user_inputs, movie_inputs)

            loss = loss_function(tag_rank, target)
            if i_batch % 20 == 0:
                writer.add_scalar('data/loss', loss, i_batch * 20)
                print(loss)

            loss_all += loss
            loss.backward()
            optimizer.step()
        print('Epoch {}: loss:{}'.format(epoch, loss_all))
    writer.export_scalars_to_json("./test.json")
    writer.close()


if __name__ == '__main__':

    # user_max_dict, movie_max_dict, convParams as defined above
    model = rec_model(user_max_dict=user_max_dict, movie_max_dict=movie_max_dict, convParams=convParams)
    model = model.to(device)

    # train the model
    # train(model=model, num_epochs=1)
    # torch.save(model.state_dict(), 'Params/model_params.pkl')

    # extract user and movie features
    # model.load_state_dict(torch.load('Params/model_params.pkl'))
    # from recInterface import saveMovieAndUserFeature
    # saveMovieAndUserFeature(model=model)

    # test the recommender
    from recInterface import getKNNitem, getUserMostLike
    print(getKNNitem(itemID=100, K=10))
    print(getUserMostLike(uid=100))
The flow is roughly:
Build the recommendation model from model.py.
Train the model with train(model, num_epochs=5, lr=0.0001):
Choose the loss function (MSE)
Choose the optimizer (Adam)
Build the data loader dataloader
Run training: backpropagate and update the parameters
Save the model parameters
5. The Recommendation Interface: recInterface.py
Once training finishes, we have a feature vector for every movie and every user (in the network diagram, the last-layer outputs of the two channels, just before they are joined, are the user/movie features; we return and save them after training).
saveMovieAndUserFeature(model) in recInterface.py saves these two feature sets to Params/feature_data.pkl, and also saves user and movie dictionaries for looking up a given user's or movie's attributes; for a user the format is {'uid':uid,'gender':gender,'age':age,'job':job}.
import pickle as pkl
import torch
from torch.utils.data import DataLoader
from dataset import MovieRankDataset


def saveMovieAndUserFeature(model):
    '''
    Save movie and user features to disk.
    '''
    batch_size = 256

    datasets = MovieRankDataset(pkl_file='data.p')
    dataloader = DataLoader(datasets, batch_size=batch_size, shuffle=False, num_workers=4)

    # format: {id(int): feature(numpy array)}
    user_feature_dict = {}
    movie_feature_dict = {}
    movies = {}
    users = {}
    with torch.no_grad():
        for i_batch, sample_batch in enumerate(dataloader):

            user_inputs = sample_batch['user_inputs']
            movie_inputs = sample_batch['movie_inputs']

            # B x 1 x 200 = 256 x 1 x 200
            _, feature_user, feature_movie = model(user_inputs, movie_inputs)

            feature_user = feature_user.cpu().numpy()
            feature_movie = feature_movie.cpu().numpy()

            for i in range(user_inputs['uid'].shape[0]):
                uid = user_inputs['uid'][i]  # uid
                gender = user_inputs['gender'][i]
                age = user_inputs['age'][i]
                job = user_inputs['job'][i]

                mid = movie_inputs['mid'][i]  # mid
                mtype = movie_inputs['mtype'][i]
                mtext = movie_inputs['mtext'][i]

                if uid.item() not in users.keys():
                    users[uid.item()] = {'uid': uid, 'gender': gender, 'age': age, 'job': job}
                if mid.item() not in movies.keys():
                    movies[mid.item()] = {'mid': mid, 'mtype': mtype, 'mtext': mtext}

                if uid not in user_feature_dict.keys():
                    user_feature_dict[uid] = feature_user[i]
                if mid not in movie_feature_dict.keys():
                    movie_feature_dict[mid] = feature_movie[i]

            print('Solved: {} samples'.format((i_batch + 1) * batch_size))

    feature_data = {'feature_user': user_feature_dict, 'feature_movie': movie_feature_dict}
    dict_user_movie = {'user': users, 'movie': movies}

    pkl.dump(feature_data, open('Params/feature_data.pkl', 'wb'))
    pkl.dump(dict_user_movie, open('Params/user_movie_dict.pkl', 'wb'))
recInterface.py also provides other utility functions:
getKNNitem(itemID, itemName='movie', K=1): returns the K nearest-neighbor items for the item with the given id; with itemName='user' it returns the K nearest-neighbor users instead.
The logic is simple:
load the locally saved user/movie feature set indicated by itemName
look up the target item's feature vector by itemID
compute the cosine similarity between that vector and every other user/movie
sort and return the top K users/movies
getUserMostLike(uid): returns the movie that the user with id uid likes most.
The procedure is also easy to follow:
take the dot product of the user's feature vector with every movie's feature vector in turn
treat each dot product as the user's predicted rating for that movie, and sort the ratings
return the movie with the highest rating
6. Closing Note
If you have any questions, feel free to open an issue in the GitHub repository:
https://github.com/Holy-Shine/movie_recommend_system
總結(jié)
以上是生活随笔為你收集整理的基于pytorch的电影推荐系统的全部?jī)?nèi)容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: MODIS数据的下载(新地址)
- 下一篇: Jqprint 轻量级页面打印插件