
GCN (Part 2): The GCN Model


The previous section covered preprocessing the Cora dataset; the loader returns the following:

  • features: the papers' attribute features, shape $2708 \times 1433$, row-normalized so that each paper's feature values sum to 1.
  • labels: the class index of each paper, 0–6 (7 classes).
  • adj: the adjacency matrix, shape $2708 \times 2708$.
  • idx_train: 0–139
  • idx_val: 200–499
  • idx_test: 500–1499
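
As a quick sanity check on these shapes (a minimal sketch, assuming the load_data() helper from the previous section):

from pygcn.utils import load_data  # loader from the previous section (assumed name)

adj, features, labels, idx_train, idx_val, idx_test = load_data()
print(features.shape)  # torch.Size([2708, 1433])
print(adj.shape)       # torch.Size([2708, 2708])
print(labels.max())    # tensor(6), i.e. 7 classes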

This section walks through the GCN model itself.
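
For reference, each GraphConvolution layer below implements the propagation rule of Kipf & Welling (arXiv:1609.02907):

$$H^{(l+1)} = \sigma\left(\hat{A}\, H^{(l)}\, W^{(l)}\right)$$

where $H^{(0)} = X$ is the feature matrix, $\sigma$ is the nonlinearity, and $\hat{A}$ is the normalized adjacency matrix already built during preprocessing in Part 1 (in the pygcn code the normalization is row-wise, $\tilde{D}^{-1}(A+I)$; the paper uses the symmetric $\tilde{D}^{-1/2}(A+I)\tilde{D}^{-1/2}$).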

The GCN model

model:

import torch.nn as nn
import torch.nn.functional as F
from pygcn.layers import GraphConvolution


class GCN(nn.Module):
    def __init__(self, nfeat, nhid, nclass, dropout):
        super(GCN, self).__init__()
        self.gc1 = GraphConvolution(nfeat, nhid)   # first GCN layer
        self.gc2 = GraphConvolution(nhid, nclass)  # second GCN layer
        self.dropout = dropout

    def forward(self, x, adj):
        x = F.relu(self.gc1(x, adj))
        x = F.dropout(x, self.dropout, training=self.training)
        x = self.gc2(x, adj)
        return F.log_softmax(x, dim=1)

layers:

import math

import torch
from torch.nn.parameter import Parameter
from torch.nn.modules.module import Module


class GraphConvolution(Module):
    """Simple GCN layer, similar to https://arxiv.org/abs/1609.02907"""

    def __init__(self, in_features, out_features, bias=True):
        super(GraphConvolution, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = Parameter(torch.FloatTensor(in_features, out_features))  # in_features x out_features
        if bias:
            self.bias = Parameter(torch.FloatTensor(out_features))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1. / math.sqrt(self.weight.size(1))
        self.weight.data.uniform_(-stdv, stdv)  # random uniform initialization
        if self.bias is not None:
            self.bias.data.uniform_(-stdv, stdv)

    def forward(self, input, adj):
        support = torch.mm(input, self.weight)  # X * W (dense matmul)
        output = torch.spmm(adj, support)       # sparse matmul: adj * (X * W), same result as mm
        if self.bias is not None:
            return output + self.bias
        else:
            return output

    def __repr__(self):
        return self.__class__.__name__ + ' (' \
               + str(self.in_features) + ' -> ' \
               + str(self.out_features) + ')'
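
To exercise the layer on its own (a toy sketch; random features and a sparse identity matrix stand in for the real inputs):

import torch
from pygcn.layers import GraphConvolution

layer = GraphConvolution(in_features=1433, out_features=16)
x = torch.rand(2708, 1433)         # placeholder features
adj = torch.eye(2708).to_sparse()  # placeholder for the normalized adjacency
out = layer(x, adj)
print(out.shape)                   # torch.Size([2708, 16])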

Initializing the model

Instantiating the model:

model = GCN(nfeat=features.shape[1],
            nhid=args.hidden,
            nclass=labels.max().item() + 1,
            dropout=args.dropout)

With the concrete values:

model = GCN(nfeat=1433,
            nhid=16,
            nclass=7,
            dropout=0.5)

This initializes the model's two GCN layers:

self.gc1 = GraphConvolution(in_features=1433, out_features=16)  # first GCN layer
self.gc2 = GraphConvolution(in_features=16, out_features=7)     # second GCN layer
self.dropout = 0.5
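
These shapes give 23,063 trainable parameters in total (a quick check sketch):

total = sum(p.numel() for p in model.parameters())
print(total)  # 1433*16 + 16 + 16*7 + 7 = 23063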

How each layer is initialized in detail.
First layer, gc1:

def __init__(self, in_features, out_features, bias=True):
    super(GraphConvolution, self).__init__()
    self.in_features = 1433
    self.out_features = 16
    self.weight = Parameter(torch.FloatTensor(1433, 16))  # in_features x out_features
    self.bias = Parameter(torch.FloatTensor(16))
    self.reset_parameters()  # initialize W and b

The weight $W$ has shape $1433 \times 16$; the bias $b$ has shape $1 \times 16$.
Second layer, gc2:

def __init__(self, in_features, out_features, bias=True):
    super(GraphConvolution, self).__init__()
    self.in_features = 16
    self.out_features = 7
    self.weight = Parameter(torch.FloatTensor(16, 7))  # in_features x out_features
    self.bias = Parameter(torch.FloatTensor(7))
    self.reset_parameters()  # initialize W and b

The weight $W$ has shape $16 \times 7$; the bias $b$ has shape $1 \times 7$.

Running the forward pass

  • First, the model's forward is executed:

    def forward(self, x, adj):
        x = F.relu(self.gc1(x, adj))
        x = F.dropout(x, self.dropout, training=self.training)
        x = self.gc2(x, adj)
        return F.log_softmax(x, dim=1)
  • self.gc1(x, adj) is called, where x is the input feature matrix of shape $2708 \times 1433$ and adj is the adjacency matrix of shape $2708 \times 2708$.

  • Inside the gc1 layer:

    support = torch.mm(input, self.weight)  # X * W (dense matmul)
    output = torch.spmm(adj, support)       # sparse matmul: adj * (X * W)

    This computes $output_{2708 \times 16} = adj_{2708 \times 2708} \times input_{2708 \times 1433} \times W_{1433 \times 16}$, then returns $output_{2708 \times 16} + bias_{1 \times 16}$.

    output[0] = tensor([ 0.0201, -0.0242,  0.0608,  0.0272,  0.0133,  0.0085,  0.0084, -0.0265,
             0.0149, -0.0100,  0.0077,  0.0029,  0.0145, -0.0181, -0.0021, -0.0183],
           grad_fn=<SelectBackward>)
    self.bias = Parameter containing:
    tensor([-0.2232, -0.0295, -0.1387,  0.2170, -0.1749, -0.1551,  0.1056, -0.1860,
            -0.0666, -0.1327,  0.0212,  0.1587,  0.2496, -0.0154, -0.1683,  0.0151],
           requires_grad=True)
    (output + self.bias)[0] = tensor([-0.2030, -0.0537, -0.0779,  0.2442, -0.1616, -0.1466,  0.1140, -0.2125,
            -0.0516, -0.1427,  0.0289,  0.1615,  0.2641, -0.0336, -0.1704, -0.0032],
           grad_fn=<SelectBackward>)
  • Apply the $ReLU$ activation, which zeroes all negative entries:
    x = F.relu(self.gc1(x, adj))
    x[0] = tensor([0.0000, 0.0000, 0.0000, 0.2442, 0.0000, 0.0000, 0.1140, 0.0000, 0.0000,
            0.0000, 0.0289, 0.1615, 0.2641, 0.0000, 0.0000, 0.0000],
           grad_fn=<SelectBackward>)
  • During training, dropout zeroes each element with probability 0.5 and rescales the survivors by $x = \frac{x}{1-0.5}$ (e.g. 0.2442 becomes 0.4884):
    x = F.dropout(x, self.dropout, training=self.training)
    x[0] = tensor([0.0000, 0.0000, 0.0000, 0.4884, 0.0000, 0.0000, 0.2280, 0.0000, 0.0000,
            0.0000, 0.0000, 0.3230, 0.5282, 0.0000, 0.0000, 0.0000],
           grad_fn=<SelectBackward>)
  • Run the second layer, gc2:
    support = torch.mm(input, self.weight)  # X * W (dense matmul)
    output = torch.spmm(adj, support)       # sparse matmul: adj * (X * W)

    This computes $output_{2708 \times 7} = adj_{2708 \times 2708} \times input_{2708 \times 16} \times W_{16 \times 7}$, then returns $output_{2708 \times 7} + bias_{1 \times 7}$.

    output[0] = tensor([-0.1928,  0.1723,  0.1689, -0.0516,  0.0387, -0.0276, -0.1027],
           grad_fn=<SelectBackward>)
  • The result x is passed directly to F.log_softmax(x, dim=1); dim=1 applies log-softmax over the 7 class logits:
    x[0] = tensor([-2.1474, -1.7823, -1.7856, -2.0062, -1.9158, -1.9822, -2.0573],
           grad_fn=<SelectBackward>)
  • Compute loss and acc_train from the output and the labels:

    loss = tensor(1.9186, grad_fn=<NllLossBackward>)
    acc_train = tensor(0.1357, dtype=torch.float64)
  • Finally, backpropagate and update the gradients of W and b.
  • This completes one training iteration; a sketch of the full step follows below.
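
Putting the whole step together (a minimal sketch; the Adam settings are assumed hyperparameters, not taken from this article):

import torch.nn.functional as F
import torch.optim as optim

optimizer = optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)  # assumed settings

model.train()
optimizer.zero_grad()
output = model(features, adj)                                  # log-probabilities, 2708 x 7
loss_train = F.nll_loss(output[idx_train], labels[idx_train])  # NLL loss pairs with log_softmax
acc_train = (output[idx_train].argmax(dim=1) == labels[idx_train]).float().mean()
loss_train.backward()                                          # backpropagate
optimizer.step()                                               # update W and b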
Summary

A full forward pass of this two-layer GCN is $Z = \mathrm{log\_softmax}\big(\hat{A}\,\mathrm{ReLU}(\hat{A} X W^{(1)})\, W^{(2)}\big)$, with dropout between the layers during training; the NLL loss on the training nodes is then backpropagated to update the weights and biases.
