
[Paper Reading] A Gentle Introduction to Graph Neural Networks (4)

The challenges of using graphs in machine learning
圖在機(jī)器學(xué)習(xí)中的應(yīng)用挑戰(zhàn)

So, how do we go about solving these different graph tasks with neural networks? The first step is to think about how we will represent graphs to be compatible with neural networks.

Machine learning models typically take rectangular or grid-like arrays as input. So, it’s not immediately intuitive how to represent graphs in a format that is compatible with deep learning. Graphs have up to four types of information that we will potentially want to use to make predictions: nodes, edges, global-context and connectivity. The first three are relatively straightforward: for example, with nodes we can form a node feature matrix $N$ by assigning each node an index $i$ and storing the feature for $node_i$ in $N$. While these matrices have a variable number of examples, they can be processed without any special techniques.
機(jī)器學(xué)習(xí)模型通常采用矩形或網(wǎng)格狀array作為輸入。因此,如何用一種與深度學(xué)習(xí)兼容的格式來(lái)表示它們并不是一種直觀的方法。圖有多達(dá)四種類型的信息,我們可能希望使用它們來(lái)進(jìn)行預(yù)測(cè):節(jié)點(diǎn)、邊、全局上下文和連通性。前三個(gè)相對(duì)簡(jiǎn)單: 例如,對(duì)于節(jié)點(diǎn),我們可以為每個(gè)節(jié)點(diǎn)分配一個(gè)索引iii,并將nodeinode_inodei?的特征存儲(chǔ)在NNN中,從而形成一個(gè)節(jié)點(diǎn)特征矩陣NNN。雖然這些矩陣的示例數(shù)量是可變的,但它們無(wú)需任何特殊技術(shù)就可以處理。

However, representing a graph’s connectivity is more complicated. Perhaps the most obvious choice would be to use an adjacency matrix, since this is easily tensorisable. However, this representation has a few drawbacks. From the example dataset table, we see the number of nodes in a graph can be on the order of millions, and the number of edges per node can be highly variable. Often, this leads to very sparse adjacency matrices, which are space-inefficient.
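As a rough illustration of that sparsity, the sketch below (with assumed node and edge counts, not figures from the paper) builds a dense adjacency matrix for a graph with very few edges and checks what fraction of its entries are non-zero:

```python
import numpy as np

n_nodes = 1000
edges = [(0, 1), (1, 2), (2, 3)]     # a hypothetical, very sparse graph

A = np.zeros((n_nodes, n_nodes), dtype=np.int8)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1                      # undirected edge: store both directions

print(f"{A.size:,} entries, {A.sum() / A.size:.4%} non-zero")
```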

Another problem is that there are many adjacency matrices that can encode the same connectivity, and there is no guarantee that these different matrices would produce the same result in a deep neural network (that is to say, they are not permutation invariant).
另一個(gè)問題是,有許多鄰接矩陣在編碼后具有相同的連通性,并且不能保證這些不同的矩陣會(huì)在深度神經(jīng)網(wǎng)絡(luò)中產(chǎn)生相同的結(jié)果(也就是說,它們不是置換不變的)。

Learning permutation invariant operations is an area of recent research.[16] [17]

For example, the Othello graph from before can be described equivalently with these two adjacency matrices. It can also be described with every other possible permutation of the nodes.

表示同一個(gè)圖的兩個(gè)鄰接矩陣


The example below shows every adjacency matrix that can describe this small graph of 4 nodes. This is already a significant number of adjacency matrices; for larger examples like Othello, the number is untenable.
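To see why this blows up, a quick back-of-the-envelope count (the node counts below are arbitrary examples): there are at most $n!$ node orderings, each yielding its own adjacency matrix.

```python
import math

# Upper bound: each of the n! node orderings yields its own adjacency
# matrix (graphs with symmetries produce fewer *distinct* matrices).
for n in (4, 10, 25):
    print(f"{n} nodes: up to {math.factorial(n):,} adjacency matrices")
```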

All of these adjacency matrices represent the same graph.


One elegant and memory-efficient way of representing sparse matrices is as adjacency lists. These describe the connectivity of edge $e_k$ between nodes $n_i$ and $n_j$ as a tuple $(i, j)$ in the k-th entry of an adjacency list. Since we expect the number of edges to be much lower than the number of entries for an adjacency matrix ($n_{nodes}^2$), we avoid computation and storage on the disconnected parts of the graph.
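A minimal sketch of this representation, with hypothetical edges and scalar attributes for brevity:

```python
# A small hypothetical graph, stored compactly: entry k of the
# adjacency list is the tuple (i, j) of nodes joined by edge e_k.
adjacency_list = [(0, 1), (1, 2), (2, 3), (3, 0)]

nodes = [0.2, 0.9, 0.4, 0.7]         # one scalar attribute per node
edges = [1.0, 1.0, 2.0, 1.0]         # one scalar attribute per edge e_k

# Storage grows as O(n_edges) instead of O(n_nodes**2):
print(len(adjacency_list), "entries instead of", len(nodes) ** 2)
```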

Another way of stating this is with Big-O notation: it is preferable to have $O(n_{edges})$ rather than $O(n_{nodes}^2)$.

To make this notion concrete, we can see how information in different graphs might be represented under this specification:

Interactive figure (in the original article): a small graph on one side, and the same graph's information in a tensor representation on the other; selecting the edges, nodes, or global graph marker shows each attribute's representation.
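One plausible way to package all four kinds of information under this specification is a small record of tensors; the field names and values below are assumptions of this note, not the article's exact schema:

```python
# Hypothetical packaging of nodes, edges, connectivity, and global context.
graph = {
    "nodes":          [0.2, 0.9, 0.4],           # one value per node
    "edges":          [1.0, 2.0, 1.0],           # one value per edge
    "adjacency_list": [(0, 1), (1, 2), (2, 0)],  # edge k connects (i, j)
    "global":         0.5,                       # one graph-level value
}
```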


It should be noted that the figure uses scalar values per node/edge/global, but most practical tensor representations have vectors per graph attribute. Instead of a node tensor of size $[n_{nodes}]$ we will be dealing with node tensors of size $[n_{nodes}, node_{dim}]$. Same for the other graph attributes.
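With vector-valued attributes the same layout simply gains a feature dimension; a sketch with assumed sizes:

```python
import numpy as np

n_nodes, node_dim = 4, 8             # assumed sizes for illustration
n_edges, edge_dim = 3, 4

nodes = np.random.randn(n_nodes, node_dim)   # shape [n_nodes, node_dim]
edges = np.random.randn(n_edges, edge_dim)   # shape [n_edges, edge_dim]
global_context = np.random.randn(16)         # a single global vector

print(nodes.shape, edges.shape, global_context.shape)
```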


References

[16] Mena, G., Belanger, D., Linderman, S. and Snoek, J., 2018. Learning Latent Permutations with Gumbel-Sinkhorn Networks.

[17] Murphy, R.L., Srinivasan, B., Rao, V. and Ribeiro, B., 2018. Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs.
