
GraphSage model on the Cora dataset

Published: 2024/9/18

Creating the GraphSage model:

input_size = 1433, out_size = 128, num_layers = 2, agg_func = 'MEAN'
raw_features: raw node features, shape 2708 * 1433
adj_lists: adjacency for every node, e.g. 562: {60, 252, 370, 440, 671, 1117, 1183, 1401, 1889, 2014, 2018}
sage_layer1: SageLayer
sage_layer2: SageLayer(layer_size=128, out_size=128)

First layer (sage_layer1):

input_size = 1433, out_size = 128; weight: weight matrix, randomly initialized, shape 128 * (1433 * 2) = 128 * 2866

Second layer (sage_layer2):

input_size = 128, out_size = 128; weight: weight matrix, randomly initialized, shape 128 * (128 * 2) = 128 * 256
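Based on the shapes above, a SageLayer can be sketched as follows. This is a minimal sketch, not the article's exact code: the weight has shape out_size * (input_size * 2) because self and neighbor features are concatenated, and the multiplication order in forward is an assumption consistent with that shape.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SageLayer(nn.Module):
    # One GraphSAGE layer: concatenate self features with aggregated
    # neighbor features, apply a linear map W, then ReLU.
    def __init__(self, input_size, out_size):
        super().__init__()
        # shape: out_size x (input_size * 2), since [self || neighbors]
        self.weight = nn.Parameter(torch.empty(out_size, input_size * 2))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, self_feats, aggregate_feats):
        combined = torch.cat([self_feats, aggregate_feats], dim=1)
        return F.relu(self.weight.mm(combined.t())).t()

layer1 = SageLayer(1433, 128)  # weight: 128 x 2866
layer2 = SageLayer(128, 128)   # weight: 128 x 256
```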

Creating the Classification model:

self.layer = fully connected layer; weight: weight matrix, randomly initialized, shape 128 * 7
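A minimal sketch of the Classification module: a single fully connected layer from the 128-dim embedding to the 7 Cora classes, followed by log-softmax (the log-softmax matches the negative log-probability outputs shown later; the exact init is an assumption).

```python
import torch
import torch.nn as nn

class Classification(nn.Module):
    # Final classifier: 128-dim embedding -> 7 class log-probabilities.
    def __init__(self, emb_size=128, num_classes=7):
        super().__init__()
        self.layer = nn.Linear(emb_size, num_classes)
        nn.init.xavier_uniform_(self.layer.weight)

    def forward(self, embeds):
        return torch.log_softmax(self.layer(embeds), dim=1)

clf = Classification(128, 7)
logists = clf(torch.randn(20, 128))  # shape 20 x 7
```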

Entering the forward pass (forward) of GraphSage:

The first batch contains 20 nodes: nodes_batch = {1, 16, 131, 245, 439, 501, 527, 658, 699, 716, 944, 963, 1003, 1081, 1439, 1582, 1660, 1681, 2007, 2577}

This produces nodes_batch_layers, with the following contents:

nodes_batch_layers[2][0]: the original 20 batch nodes:
[1, 16, 131, 245, 439, 501, 527, 658, 699, 716, 944, 963, 1003, 1081, 1439, 1582, 1660, 1681, 2007, 2577]

nodes_batch_layers[1][0]: the 84 neighbor nodes of those 20 nodes (including the original 20 themselves):
[1, 131, 388, 2180, 2182, 1545, 521, 1020, 527, 16, 1681, 1682, 658, 2577, 791, 663, 157, 1439, 672, 1697, 803, 420, 163, 294, 2342, 552, 1577, 427, 172, 2606, 174, 1968, 1582, 944, 2483, 53, 566, 439, 952, 1081, 1082, 699, 1083, 1469, 1086, 2367, 192, 1337, 1335, 963, 68, 588, 76, 716, 2508, 466, 1364, 2007, 344, 97, 2146, 611, 1636, 2017, 2662, 1897, 747, 1003, 1004, 2028, 1519, 496, 632, 1259, 1779, 1780, 245, 501, 503, 1656, 1908, 2677, 1909, 1660]

nodes_batch_layers[1][1]: for each of the 20 nodes, its set of self plus neighbors:
[{527, 747, 1656, 1968, 2606, 2662}, {1681, 1682}, {97, 192, 388, 420, 439}, {68, 1779, 1780, 2007}, {53, 803, 1439}, {344, 963, 1519}, {294, 566, 588, 699, 747}, {76, 131}, {672, 791, 1003, 1004, 1364, 1469, 1897, 2028, 2146, 2182, 2483}, {1, 344}, {163, 427, 658, 1577}, {952, 1083, 1660}, {16, 466, 552, 566, 1697}, {157, 611, 716}, {172, 245, 1636}, {1020, 2180, 2342, 2367, 2508, 2577}, {174, 496, 501, 503, 663}, {632, 1086, 1545, 1582, 1908, 1909, 2017, 2677}, {521, 1081, 1082, 1259}, {944, 1335, 1337}]

nodes_batch_layers[1][2]: dict numbering the 84 nodes:
{1: 0, 131: 1, 388: 2, 2180: 3, 2182: 4, 1545: 5, 521: 6, 1020: 7, 527: 8, 16: 9, 1681: 10, 1682: 11, 658: 12, 2577: 13, 791: 14, 663: 15, 157: 16, 1439: 17, 672: 18, 1697: 19, 803: 20, 420: 21, 163: 22, 294: 23, 2342: 24, 552: 25, 1577: 26, 427: 27, 172: 28, 2606: 29, 174: 30, 1968: 31, 1582: 32, 944: 33, 2483: 34, 53: 35, 566: 36, 439: 37, 952: 38, 1081: 39, 1082: 40, 699: 41, 1083: 42, 1469: 43, 1086: 44, 2367: 45, 192: 46, 1337: 47, 1335: 48, 963: 49, 68: 50, 588: 51, 76: 52, 716: 53, 2508: 54, 466: 55, 1364: 56, 2007: 57, 344: 58, 97: 59, 2146: 60, 611: 61, 1636: 62, 2017: 63, 2662: 64, 1897: 65, 747: 66, 1003: 67, 1004: 68, 2028: 69, 1519: 70, 496: 71, 632: 72, 1259: 73, 1779: 74, 1780: 75, 245: 76, 501: 77, 503: 78, 1656: 79, 1908: 80, 2677: 81, 1909: 82, 1660: 83}

nodes_batch_layers[0][0]: the 304 neighbor nodes of the 84 nodes (including the 84 themselves), as a list
nodes_batch_layers[0][1]: the neighbor sets of the 84 nodes
nodes_batch_layers[0][2]: dict numbering the 304 nodes
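Each layer of nodes_batch_layers is built by expanding the node set with its neighbors. A minimal sketch of that expansion; the helper name get_unique_neighs_list and the toy adjacency are illustrative assumptions, not the article's exact code (and the dict numbering here is sorted, while the real one follows set-iteration order):

```python
# For each node, collect its neighbor set plus the node itself, flatten
# into a unique node list, and index every involved node with a dict.
def get_unique_neighs_list(nodes, adj_lists):
    samp_neighs = [set(adj_lists[n]) | {n} for n in nodes]   # self + neighbors
    unique_nodes_list = sorted(set().union(*samp_neighs))    # all involved nodes
    unique_nodes = {n: i for i, n in enumerate(unique_nodes_list)}  # node -> row
    return samp_neighs, unique_nodes, unique_nodes_list

# Toy graph: a batch of two nodes expands to four involved nodes.
adj_lists = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}}
lower_layer = get_unique_neighs_list([0, 3], adj_lists)
```

Calling this once per hop (on the batch, then on the resulting node list) yields exactly the 20 -> 84 -> 304 expansion shown above.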

Next comes the aggregation step:

index = 1:
nb = the nodes plus their neighbors, 84 in total
pre_neighs = all neighbor information from the layer below: the self-plus-neighbor sets, the node-to-index dict, and the list of all involved nodes

aggregation:
  nodes: current node list, 84 nodes
  pre_hidden_embs: node features, shape 2708 * 1433
  pre_neighs: neighbor information from the layer below
  embed_matrix: features of the current 304 nodes, shape 304 * 1433
  mask: build an 84 * 304 adjacency matrix and row-normalize it, mask = mask / mask.sum(dim=1)
  aggregate_feats: aggregated neighbor features, mask * embed_matrix, shape 84 * 1433
  Summary: the 84 current nodes aggregate the features of their 304 surrounding neighbor nodes; after normalization the result has shape 84 * 1433.

sageLayer:
  self_feats = raw features, shape 84 * 1433
  aggregate_feats = features aggregated over the 304 neighbors, shape 84 * 1433
  combined: concatenate the two, shape 84 * 2866
  F.relu(combined * W): activation, producing features of shape 84 * 128
  Returns features of shape 84 * 128.

index = 2:
nb = the original 20 nodes
pre_neighs = all neighbor information from the layer below, as above

aggregation:
  nodes: current node list, 20 nodes
  pre_hidden_embs: node features, shape 84 * 128
  pre_neighs: neighbor information from the layer below
  embed_matrix = pre_hidden_embs: the 84 node features computed by the previous layer, shape 84 * 128
  mask: build a 20 * 84 adjacency matrix and row-normalize it, mask = mask / mask.sum(dim=1)
  aggregate_feats: aggregated neighbor features, mask * embed_matrix, shape 20 * 128
  Summary: the 20 current nodes aggregate the features of their 84 neighbor nodes (these neighbor features were computed by the previous layer, hence shape 84 * 128 rather than 84 * 1433); after normalization the result has shape 20 * 128.

sageLayer:
  self_feats = the 20 nodes' layer-1 features, shape 20 * 128
  aggregate_feats = features aggregated over the 84 neighbors, shape 20 * 128
  combined: concatenate the two, shape 20 * 256
  F.relu(combined * W): activation, producing features of shape 20 * 128
  Finally the 20 nodes return features of shape 20 * 128.
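The mask-based mean aggregation above can be sketched as follows. pre_neighs is assumed to carry (self-plus-neighbor sets, node-to-row dict, unique node list), matching the layer info built earlier; the function and the toy example are illustrations, not the article's exact code:

```python
import torch

def aggregate(nodes, pre_hidden_embs, pre_neighs):
    # nodes is kept only to mirror the walkthrough's signature
    samp_neighs, unique_nodes, unique_nodes_list = pre_neighs
    # feature rows for every involved node (e.g. 304 x 1433 in layer 1)
    embed_matrix = pre_hidden_embs[torch.tensor(unique_nodes_list)]
    # adjacency mask: one row per batch node, one column per involved node
    mask = torch.zeros(len(samp_neighs), len(unique_nodes))
    for i, neighs in enumerate(samp_neighs):
        for n in neighs:
            mask[i, unique_nodes[n]] = 1
    mask = mask / mask.sum(dim=1, keepdim=True)  # row-normalize: MEAN aggregator
    return mask.mm(embed_matrix)                 # aggregated neighbor features

# Toy example: one-hot features make the averaging easy to read.
samp_neighs = [{0, 1}, {2, 3}]
unique_nodes_list = [0, 1, 2, 3]
unique_nodes = {n: i for i, n in enumerate(unique_nodes_list)}
agg = aggregate([0, 2], torch.eye(4), (samp_neighs, unique_nodes, unique_nodes_list))
```

Each output row is the mean of the feature rows of that node's self-plus-neighbor set, which is exactly what mask * embed_matrix computes.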

Next, the Classification model computes the output, of shape 20 * 7:

logists = tensor([[-1.9839, -1.9866, -2.0804, -1.9014, -1.8841, -1.7800, -2.0367],
        [-1.9142, -1.9110, -2.0967, -1.8599, -2.0505, -1.8341, -1.9832],
        [-1.9683, -1.9229, -1.9720, -1.9572, -1.9711, -1.8678, -1.9667],
        [-1.9209, -2.0136, -2.0654, -1.8666, -1.9862, -1.8006, -1.9937],
        [-2.0788, -2.0250, -1.9542, -1.8534, -1.8923, -1.8745, -1.9634],
        [-2.0361, -1.8748, -2.0676, -1.8287, -1.9016, -1.8289, -2.1284],
        [-2.0782, -1.8997, -1.9607, -1.9720, -1.9491, -1.9568, -1.8227],
        [-2.0626, -1.8991, -2.0198, -1.8183, -1.9802, -1.8021, -2.0779],
        [-1.9372, -1.8791, -1.9737, -1.9121, -1.9777, -1.8976, -2.0547],
        [-2.0668, -1.9538, -1.9821, -1.8051, -2.0310, -1.8033, -2.0139],
        [-2.0350, -1.9644, -1.9913, -1.8626, -1.9652, -1.8815, -1.9324],
        [-2.0077, -1.8961, -2.0062, -2.0043, -1.8142, -1.8510, -2.0695],
        [-1.9880, -1.9198, -1.9179, -1.8831, -1.9339, -2.0266, -1.9590],
        [-2.0564, -1.9405, -1.9988, -1.8407, -1.8986, -1.9623, -1.9384],
        [-2.0603, -1.9230, -1.9647, -1.8786, -1.9230, -1.9342, -1.9470],
        [-2.0483, -1.9780, -2.0290, -1.9329, -1.8557, -1.8542, -1.9407],
        [-1.9123, -1.9745, -1.9420, -1.9195, -1.9199, -1.9526, -2.0038],
        [-2.0144, -1.9391, -1.9132, -1.8205, -1.9620, -1.9279, -2.0622],
        [-2.1094, -1.7795, -2.0029, -1.8137, -2.0519, -1.8241, -2.1027],
        [-2.0052, -1.9684, -2.0337, -1.8064, -1.9325, -1.8070, -2.1065]])

loss_sup = -torch.sum(logists[range(logists.size(0)), labels_batch], 0)

The indexing picks each row's log-probability at its true label before negating and summing:
logists[range(logists.size(0)), labels_batch] = tensor([-1.9839, -1.8599, -1.9683, -2.0654, -1.8745, -1.8748, -1.9607, -1.8183, -1.9777, -1.9538, -1.8815, -2.0043, -1.9179, -2.0564, -1.9470, -1.9329, -1.9123, -1.8205, -1.7795, -1.8064], grad_fn=<IndexBackward>)

loss_sup /= len(nodes_batch)  # = 1.9198
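The two loss lines above amount to an NLL loss over log-softmax outputs. A runnable sketch with random stand-in values (the logists and labels_batch here are invented for illustration; the real ones come from the Classification model and the Cora training labels):

```python
import torch

torch.manual_seed(0)  # deterministic stand-in values

# Stand-in log-probabilities (20 nodes x 7 classes) and labels.
logists = torch.log_softmax(torch.randn(20, 7), dim=1)
labels_batch = torch.randint(0, 7, (20,))

# Pick each row's log-probability at its true label, negate, sum,
# and average over the batch -- the article's two loss lines.
loss_sup = -torch.sum(logists[range(logists.size(0)), labels_batch], 0)
loss_sup /= len(labels_batch)  # the article divides by len(nodes_batch)
```

This is equivalent to torch.nn.functional.nll_loss(logists, labels_batch) with the default mean reduction.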

That completes the computation for the first batch of 20 nodes:

array([ 527, 1681, 439, 2007, 1439, 963, 699, 131, 1003, 1, 658,1660, 16, 716, 245, 2577, 501, 1582, 1081, 944]) Step [1/68], Loss: 1.9198, Dealed Nodes [20/1355]

Summary

One training step thus runs as follows: sample a batch of 20 nodes, expand it over two hops to 84 and then 304 nodes, aggregate mean neighbor features layer by layer (84 * 1433 -> 84 * 128 -> 20 * 128), classify to 20 * 7 log-probabilities, and average the negative log-probabilities at the true labels to get the loss (1.9198 for this first batch).