Implementing an MLP in PyTorch
torch.nn是專門為神經(jīng)網(wǎng)絡(luò)設(shè)計的模塊化接口。nn構(gòu)建于 Autograd之上,可用來定義和運行神經(jīng)網(wǎng)絡(luò)。
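A minimal sketch of that idea (this snippet is not from the original article; the shapes are purely illustrative): a single nn.Linear module already carries a learnable weight and bias and tracks gradients through Autograd.

```python
import torch
import torch.nn as nn

# A single nn.Linear layer is itself an nn.Module: it owns learnable
# weight/bias tensors and participates in autograd.
linear = nn.Linear(in_features=64, out_features=16)

x = torch.randn(8, 64)          # a batch of 8 input vectors
y = linear(x)                   # forward pass

print(y.shape)                  # torch.Size([8, 16])
print(linear.weight.shape)      # torch.Size([16, 64]), requires_grad=True
```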
nn.functional contains common functions used in neural networks. Their defining feature is that they have no learnable parameters (e.g. ReLU, pooling, dropout). Such operations can be registered in the constructor or simply called in forward(); the recommendation here is not to put them in the constructor.
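To illustrate why parameter-free operations do not need to be registered (a small illustrative snippet, not from the original code): the module form nn.ReLU() and the functional form F.relu produce exactly the same result.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4, 8)

# Module form: typically created in __init__ and reused in forward().
relu_module = nn.ReLU()
out1 = relu_module(x)

# Functional form: called directly in forward(); there is nothing to
# register because ReLU has no learnable parameters.
out2 = F.relu(x)

print(torch.equal(out1, out2))  # True
```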
Defining a network
PyTorch already provides the building blocks we need. To define a network, subclass nn.Module and implement its forward method; PyTorch then implements the backward function automatically through autograd.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLP(nn.Module):
    def __init__(self, user_num, user_dim, layers=[32, 16, 8]):
        super(MLP, self).__init__()  # the subclass must call the parent constructor
        self.user_Embedding = nn.Embedding(user_num, user_dim)
        self.mlp = nn.Sequential()
        # Build the hidden layers from the `layers` list, so both the depth of
        # the MLP and the width of each layer can be adjusted automatically.
        for idx in range(1, len(layers)):
            self.mlp.add_module("Linear_layer_%d" % idx, nn.Linear(layers[idx - 1], layers[idx]))
            self.mlp.add_module("Relu_layer_%d" % idx, nn.ReLU(inplace=True))
        self.predict = nn.Sequential(
            nn.Linear(layers[-1], 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        user = self.user_Embedding(x)
        user = self.mlp(user)
        score = self.predict(user)
        return score


# NOTE: layers[0] is the input size of the first Linear layer, so it must equal
# user_dim for forward() to work. With the defaults here (user_dim=64 but
# layers[0]=32), print(model) succeeds, but forward() would raise a shape error.
model = MLP(1000, 64)
print(model)
```

```
MLP(
  (user_Embedding): Embedding(1000, 64)
  (mlp): Sequential(
    (Linear_layer_1): Linear(in_features=32, out_features=16, bias=True)
    (Relu_layer_1): ReLU(inplace=True)
    (Linear_layer_2): Linear(in_features=16, out_features=8, bias=True)
    (Relu_layer_2): ReLU(inplace=True)
  )
  (predict): Sequential(
    (0): Linear(in_features=8, out_features=1, bias=True)
    (1): Sigmoid()
  )
)
```

The model's learnable parameters can be iterated with model.parameters(). (The printouts below were apparently produced by a smaller instantiation, roughly MLP(5, 10, layers=[8, 4]), and the weights are randomly initialized, so your values will differ.)

```python
for parameters in model.parameters():
    print(parameters)
```

```
Parameter containing:
tensor([[ 0.4192, -1.0525,  1.4208,  0.5376,  2.1371,  0.7074,  0.1017,  0.9701,
          1.2824, -0.0436],
        [-0.6374,  0.0153, -0.1862, -0.6061,  0.5522, -1.1526,  0.3913,  0.3103,
         -0.1055,  0.6098],
        [-0.0367, -0.9573, -0.5106, -1.2440,  1.2201, -0.5424,  0.2045,  0.2208,
         -0.7557, -0.7811],
        [ 0.5457,  0.3586,  0.9871, -0.2117,  1.0885,  1.7162, -0.2125,  0.2652,
         -0.3262,  0.3047],
        [ 0.1039,  0.8132,  0.6638,  0.2618,  0.8552,  0.8300,  0.2349,  1.8830,
         -0.5149, -1.0468]], requires_grad=True)
Parameter containing:
tensor([[-0.2395,  0.1461, -0.0161,  0.0267, -0.0353,  0.2085,  0.0046, -0.1572],
        [ 0.2267,  0.0129, -0.3296, -0.2270,  0.2268,  0.1771, -0.0992,  0.2148],
        [ 0.1906,  0.1896, -0.2703, -0.3506,  0.0248,  0.1949, -0.3117,  0.0721],
        [-0.3197,  0.2782, -0.1553,  0.2509,  0.0279,  0.2040, -0.1478,  0.2943]],
       requires_grad=True)
Parameter containing:
tensor([ 0.0808, -0.3252, -0.0015, -0.0666], requires_grad=True)
Parameter containing:
tensor([[-0.3243,  0.4393, -0.2430,  0.4330]], requires_grad=True)
Parameter containing:
tensor([-0.0739], requires_grad=True)
```

model.named_parameters() additionally returns each parameter's name, which makes it easy to check shapes:

```python
for name, parameters in model.named_parameters():
    print(name, ':', parameters.size())
```

```
user_Embedding.weight : torch.Size([5, 10])
mlp.Linear_layer_1.weight : torch.Size([4, 8])
mlp.Linear_layer_1.bias : torch.Size([4])
predict.0.weight : torch.Size([1, 4])
predict.0.bias : torch.Size([1])
```
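To see the automatically derived backward in action, here is a minimal training-step sketch (not part of the original article). It assumes layers is passed explicitly so that layers[0] matches user_dim, otherwise forward() fails as noted above; the BCE loss, Adam optimizer, and random data are illustrative choices only.

```python
import torch

# Make the first hidden size match user_dim (64) so the embedding
# output can feed the first Linear layer.
model = MLP(1000, 64, layers=[64, 32, 16, 8])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.BCELoss()                   # matches the Sigmoid output

user_ids = torch.randint(0, 1000, (32,))         # a batch of 32 user indices
labels = torch.randint(0, 2, (32, 1)).float()    # dummy 0/1 targets

score = model(user_ids)                          # forward() defined above
loss = criterion(score, labels)
loss.backward()                                  # backward provided by autograd
optimizer.step()
optimizer.zero_grad()

print(loss.item())
```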
Summary

That covers the full walk-through of implementing an MLP in PyTorch; hopefully it helps you solve the problem you were working on.