[Paper Reading] A Gentle Introduction to Graph Neural Networks (2)

Published: 2023/12/15

Graphs and where to find them

You’re probably already familiar with some types of graph data, such as social networks. However, graphs are an extremely powerful and general representation of data, we will show two types of data that you might not think could be modeled as graphs: images and text. Although counterintuitive, one can learn more about the symmetries and structure of images and text by viewing them as graphs, and build an intuition that will help understand other less grid-like graph data, which we will discuss later.
Note: "image" and "graph" are distinct here. An image is a picture or photo, while a graph refers to a graph structure of nodes and edges.

Images as graphs

We typically think of images as rectangular grids with image channels, representing them as arrays (e.g., 244×244×3 floats). Another way to think of images is as graphs with regular structure, where each pixel represents a node and is connected via an edge to adjacent pixels. Each non-border pixel has exactly 8 neighbors, and the information stored at each node is a 3-dimensional vector representing the RGB value of the pixel.

A way of visualizing the connectivity of a graph is through its adjacency matrix. We order the nodes, in this case each of 25 pixels in a simple 5×5 image of a smiley face, and fill a matrix of n_nodes × n_nodes with an entry if two nodes share an edge. Note that each of these three representations below are different views of the same piece of data.

As shown above, the first panel shows the image pixels, the second the image's adjacency matrix, and the third the corresponding graph (structure). In the adjacency matrix, the pixels connected to pixel 1-1 are shaded blue, while pixels not connected to pixel 1-1 (including pixel 1-1 itself) are left white.
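The pixel-to-graph view described above (8-neighbor connectivity, n_nodes × n_nodes adjacency matrix) can be sketched in a few lines; this is a minimal illustration, and the function name is made up:

```python
def image_adjacency(height, width):
    """Adjacency matrix (list of lists) of a height x width pixel grid,
    where each pixel is a node connected to its up-to-8 adjacent pixels."""
    n = height * width
    adj = [[0] * n for _ in range(n)]
    for r in range(height):
        for c in range(width):
            i = r * width + c  # row-major node ordering
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue  # no self-loops
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < height and 0 <= cc < width:
                        adj[i][rr * width + cc] = 1
    return adj

adj = image_adjacency(5, 5)
print(len(adj))      # 25 nodes for the 5x5 smiley-face image
print(sum(adj[12]))  # an interior pixel has exactly 8 neighbors
print(sum(adj[0]))   # a corner pixel has only 3
```

Row sums of the adjacency matrix give node degrees, which is how the "each non-border pixel has exactly 8 neighbors" claim can be checked directly.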

Text as graphs

We can digitize text by associating indices to each character, word, or token, and representing text as a sequence of these indices. This creates a simple directed graph, where each character or index is a node and is connected via an edge to the node that follows it.


Of course, in practice, this is not usually how text and images are encoded: these graph representations are redundant since all images and all text will have very regular structures. For instance, images have a banded structure in their adjacency matrix because all nodes (pixels) are connected in a grid. The adjacency matrix for text is just a diagonal line, because each word only connects to the prior word, and to the next one.
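The directed-chain view of text, and the resulting single off-diagonal band in its adjacency matrix, can be sketched as follows (the sentence is just an example):

```python
def text_adjacency(tokens):
    """Directed chain graph over a token sequence: each token is a node
    with one edge pointing to the token that follows it."""
    n = len(tokens)
    adj = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        adj[i][i + 1] = 1  # only the band just above the main diagonal is set
    return adj

tokens = "graphs are all around us".split()
adj = text_adjacency(tokens)
for row in adj:
    print(row)
```

Because each token connects only to its successor, the matrix has exactly len(tokens) - 1 nonzero entries, all on one off-diagonal.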

This representation (a sequence of character tokens) refers to the way text is often represented in RNNs; other models, such as Transformers, can be considered to view text as a fully connected graph where we learn the relationship between tokens. See more in Graph Attention Networks.

Graph-valued data in the wild

Graphs are a useful tool to describe data you might already be familiar with. Let’s move on to data which is more heterogeneously structured. In these examples, the number of neighbors to each node is variable (as opposed to the fixed neighborhood size of images and text). This data is hard to phrase in any other way besides a graph.

Molecules as graphs. Molecules are the building blocks of matter, and are built of atoms and electrons in 3D space. All particles are interacting, but when a pair of atoms are stuck in a stable distance from each other, we say they share a covalent bond. Different pairs of atoms and bonds have different distances (e.g. single-bonds, double-bonds). It’s a very convenient and common abstraction to describe this 3D object as a graph, where nodes are atoms and edges are covalent bonds[8]. Here are two common molecules, and their associated graphs.
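The atoms-as-nodes, bonds-as-edges abstraction can be sketched with a toy molecule; formaldehyde (H2C=O) is used here purely for brevity, and this hand-rolled representation is illustrative, not a chemistry library's API:

```python
# Nodes are atoms (with an element label as the node feature);
# edges are covalent bonds, annotated with a bond order (1 = single, 2 = double).
atoms = ["C", "O", "H", "H"]          # formaldehyde, H2C=O
bonds = [(0, 1, 2),                   # C=O double bond
         (0, 2, 1),                   # C-H single bond
         (0, 3, 1)]                   # C-H single bond

def molecule_adjacency(atoms, bonds):
    """Adjacency matrix whose entries store the bond order (0 = no bond)."""
    n = len(atoms)
    adj = [[0] * n for _ in range(n)]
    for i, j, order in bonds:
        adj[i][j] = order
        adj[j][i] = order  # covalent bonds are undirected
    return adj

adj = molecule_adjacency(atoms, bonds)
print(adj[0])  # the carbon's row: bonded to every other atom
```

Storing the bond order in the matrix entry is one simple way to attach edge features; real toolkits keep richer per-edge attributes.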

(Left) 3D representation of the citronellal molecule. (Center) Adjacency matrix of the bonds in the molecule. (Right) Graph representation of the molecule.


(Left) 3D representation of the caffeine molecule. (Center) Adjacency matrix of the bonds in the molecule. (Right) Graph representation of the molecule.


Social networks as graphs. Social networks are tools to study patterns in collective behaviour of people, institutions and organizations. We can build a graph representing groups of people by modelling individuals as nodes, and their relationships as edges.
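Modelling individuals as nodes and relationships as edges can be sketched from a plain edge list; the names here are invented for illustration:

```python
# Hypothetical friendship edge list: individuals are nodes, relationships edges.
edges = [("Ana", "Ben"), ("Ana", "Cho"), ("Ben", "Cho"), ("Cho", "Dan")]

adjacency = {}
for a, b in edges:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)  # friendships are symmetric

# Node degree = number of relationships each person has.
degrees = {person: len(friends) for person, friends in adjacency.items()}
print(degrees)  # {'Ana': 2, 'Ben': 2, 'Cho': 3, 'Dan': 1}
```

Unlike the pixel and token graphs above, the degrees here already vary from node to node, which is the defining feature of this more heterogeneous data.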

(Left) A scene from the play Othello. (Center) Adjacency matrix of the interactions between characters in the play. (Right) Graph representation of these interactions.


Unlike image and text data, social networks do not have identical adjacency matrices.

(Left) Image of a karate tournament. (Center) Adjacency matrix of the interactions between people in a karate club. (Right) Graph representation of these interactions.


Citation networks as graphs. Scientists routinely cite other scientists’ work when publishing papers. We can visualize these networks of citations as a graph, where each paper is a node, and each directed edge is a citation between one paper and another. Additionally, we can add information about each paper into each node, such as a word embedding of the abstract. (see [9], [10], [11]).
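A citation network combines directed edges with per-node features; the paper names and tiny "embedding" vectors below are entirely made up to illustrate the shape of the data:

```python
# Each paper is a node carrying a feature vector (standing in for an
# abstract embedding); a directed edge (src, dst) means src cites dst.
papers = {
    "paper_a": [0.1, 0.3],
    "paper_b": [0.7, 0.2],
    "paper_c": [0.4, 0.9],
}
citations = [("paper_b", "paper_a"), ("paper_c", "paper_a"), ("paper_c", "paper_b")]

# In-degree = how many times each paper is cited.
cited_count = {p: 0 for p in papers}
for src, dst in citations:
    cited_count[dst] += 1
print(cited_count)  # {'paper_a': 2, 'paper_b': 1, 'paper_c': 0}
```

Because citation edges are directed, in-degree and out-degree carry different information (being cited vs. citing), unlike the undirected social-network example.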

Other examples. In computer vision, we sometimes want to tag objects in visual scenes. We can then build graphs by treating these objects as nodes, and their relationships as edges. Machine learning models, programming code[12] and math equations[13] can also be phrased as graphs, where the variables are nodes, and edges are operations that have these variables as input and output. You might see the term “dataflow graph” used in some of these contexts.

The structure of real-world graphs can vary greatly between different types of data?—?some graphs have many nodes with few connections between them, or vice versa. Graph datasets can vary widely (both within a given dataset, and between datasets) in terms of the number of nodes, edges, and the connectivity of nodes.
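The kind of summary statistics this variability calls for (node count, edge count, degree spread) is easy to compute from an edge list; the star graph below is a deliberately extreme toy example of "many nodes, few connections, one hub":

```python
def graph_stats(n_nodes, edges):
    """Basic summary statistics of an undirected graph given as an edge list."""
    degree = [0] * n_nodes
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    return {
        "nodes": n_nodes,
        "edges": len(edges),
        "min_degree": min(degree),
        "max_degree": max(degree),
    }

# A small star graph: node 0 connects to everyone, the rest only to node 0.
stats = graph_stats(5, [(0, 1), (0, 2), (0, 3), (0, 4)])
print(stats)  # {'nodes': 5, 'edges': 4, 'min_degree': 1, 'max_degree': 4}
```

The min/max degree gap in even this tiny example hints at why models must cope with highly variable neighborhood sizes.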

Summary statistics of graphs found in the wild. The numbers depend on featurization decisions. More useful statistics and graphs can be found in KONECT [14].


References

[8] Convolutional Networks on Graphs for Learning Molecular Fingerprints Duvenaud, D., Maclaurin, D., Aguilera-Iparraguirre, J., Gomez-Bombarelli, R., Hirzel, T., Aspuru-Guzik, A. and Adams, R.P., 2015.

[9] Distributed Representations of Words and Phrases and their Compositionality Mikolov, T., Sutskever, I., Chen, K., Corrado, G. and Dean, J., 2013.

[10] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding Devlin, J., Chang, M., Lee, K. and Toutanova, K., 2018.

[11] GloVe: Global Vectors for Word Representation Pennington, J., Socher, R. and Manning, C., 2014. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).

[12] Learning to Represent Programs with Graphs Allamanis, M., Brockschmidt, M. and Khademi, M., 2017.

[13] Deep Learning for Symbolic Mathematics Lample, G. and Charton, F., 2019.

[14] KONECT Kunegis, J., 2013. Proceedings of the 22nd International Conference on World Wide Web - WWW '13 Companion.
