Unifying Graph Convolutional Neural Networks and Label Propagation

Hongwei Wang 1   Jure Leskovec 1

1 Computer Science Department, Stanford University, Stanford, CA 94305, United States. Correspondence to: Hongwei Wang.

Abstract

Label Propagation (LPA) and Graph Convolutional Neural Networks (GCN) are both message passing algorithms on graphs. Both solve the task of node classification, but LPA propagates node label information across the edges of the graph, while GCN propagates and transforms node feature information. However, while conceptually similar, the theoretical relationship between LPA and GCN has not yet been investigated. Here we study the relationship between LPA and GCN in terms of two aspects: (1) feature/label smoothing, where we analyze how the feature/label of one node is spread over its neighbors; and (2) feature/label influence, i.e., how much the initial feature/label of one node influences the final feature/label of another node. Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification. In our unified model, edge weights are learnable, and LPA serves as regularization to assist the GCN in learning proper edge weights that lead to improved classification performance. Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models. In a number of experiments on real-world graphs, our model shows superiority over state-of-the-art GCN-based methods in terms of node classification accuracy.

1. Introduction

Consider the problem of node classification in a graph, where the goal is to learn a mapping $M: V \to L$ from the node set $V$ to the label set $L$. A solution to this problem is widely applicable to various scenarios, e.g., inferring the income of users in a social network or classifying scientific articles in a citation network. Different from a generic machine learning problem where samples are independent from each other, nodes are connected by edges in the graph, which provide additional information and require more delicate modeling. To capture this graph information, researchers have mainly designed models under the assumption that labels and features vary smoothly over the edges of the graph. In particular, on the label side $L$, node labels are propagated and aggregated along edges in the graph, which is known as the Label Propagation Algorithm (LPA) (Zhu et al., 2005; Zhou et al., 2004; Zhang & Lee, 2007; Wang & Zhang, 2008; Karasuyama & Mamitsuka, 2013; Gong et al., 2017; Liu et al., 2019a). On the node side $V$, node features are propagated along edges and transformed through neural network layers, which is known as Graph Convolutional Neural Networks (GCN) (Kipf & Welling, 2017; Hamilton et al., 2017; Li et al., 2018; Xu et al., 2018; Liao et al., 2019; Xu et al., 2019; Qu et al., 2019).

GCN and LPA are related in that they propagate features and labels on the two sides of the mapping $M$, respectively. However, the relationship between GCN and LPA has not yet been investigated. Specifically, what is the theoretical relationship between GCN and LPA, and how can they be combined to develop a more accurate model for node classification in graphs?

Here we study the theoretical relationship between GCN and LPA from two viewpoints: (1) Feature/label smoothing, where we show that the intuition behind GCN/LPA is smoothing the features/labels of nodes across the edges of the graph, i.e., one node's feature/label equals the weighted average of the features/labels of its neighbors. We prove that if the weights of the edges in a graph smooth the node features with high precision, they also smooth the node labels with a guaranteed upper bound on the smoothing error. And (2) feature/label influence, where we quantify how much the initial feature/label of node $v_b$ influences the output feature/label of node $v_a$ in GCN/LPA by studying the Jacobian/gradient of node $v_b$ with respect to node $v_a$. We also prove the quantitative relationship between feature influence and label influence.

Based on the above theoretical analysis, we propose a unified model, GCN-LPA, for node classification. We show that the key to improving the performance of GCN is to enable nodes within the same class/label to connect more strongly with each other by making edge weights/strengths trainable.

Then we prove that increasing the strength of edges between nodes of the same class is equivalent to increasing the accuracy of LPA's predictions. Therefore, we can first learn the optimal edge weights by minimizing the loss of predictions in LPA, then plug the optimal edge weights into a GCN to learn node representations and do the final classification. In GCN-LPA, we further combine the two steps together and train the whole model in an end-to-end fashion, where the LPA part serves as regularization to assist the GCN part in learning proper edge weights that benefit the separation of different node classes. It is worth noting that GCN-LPA can also be seen as learning attention weights for edges based on node label information, which requires less handcrafting and is more task-oriented than existing work that learns attention weights based on node feature similarity (Veličković et al., 2018; Thekumparampil et al., 2018; Zhang et al., 2018; Liu et al., 2019b).

We conduct extensive experiments on five datasets, and the results indicate that our model outperforms state-of-the-art methods in terms of classification accuracy. The experimental results also show that combining GCN and LPA together is able to learn more informative edge weights, thereby leading to better performance.
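To make the end-to-end combination concrete, here is a minimal sketch of what such a joint objective could look like in PyTorch. This is our own illustration, not the authors' code: `gcn_logits` and `lpa_logits` are assumed to come from a GCN forward pass and an LPA run that share the same learnable edge weights, and `lam` is a hypothetical regularization coefficient.

```python
import torch
import torch.nn.functional as F

def gcn_lpa_loss(gcn_logits, lpa_logits, labels, train_mask, lam=1.0):
    """Joint objective sketch: GCN classification loss plus an LPA loss
    that acts as a regularizer on the shared, learnable edge weights."""
    loss_gcn = F.cross_entropy(gcn_logits[train_mask], labels[train_mask])
    loss_lpa = F.cross_entropy(lpa_logits[train_mask], labels[train_mask])
    return loss_gcn + lam * loss_lpa
```

Minimizing the second term pushes the learnable edge weights toward values under which LPA already predicts the training labels well, which is exactly the regularization role described above.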

2. Unifying GCN and LPA

In this section, we first formulate the node classification problem and briefly introduce LPA and GCN. We then prove their relationship from the viewpoints of smoothing and influence. Based on the theoretical findings, we propose a unified model GCN-LPA, and analyze why our model is theoretically superior to vanilla GCN.

2.1. Problem Formulation and Preliminaries

We begin by describing the problem of node classification on graphs and introducing notation. Consider a graph $\mathcal{G} = (V, A, X, Y)$, where $V = \{v_1, \dots, v_n\}$ is the set of nodes, $A \in \mathbb{R}^{n \times n}$ is the adjacency matrix (self-loops are included), and $a_{ij}$ (the $ij$-th entry of $A$) is the weight of the edge connecting $v_i$ and $v_j$. $N(v)$ denotes the set of immediate neighbors of node $v$ in graph $\mathcal{G}$. Each node $v_i$ has a feature vector $x_i$, which is the $i$-th row of $X$, while only the first $m$ nodes have labels $y_1, \dots, y_m$ from a label set $L = \{1, \dots, c\}$. The goal is to learn a mapping $M: V \to L$ and predict the labels of the unlabeled nodes.

Label Propagation Algorithm. LPA assumes that two connected nodes are likely to have the same label, and thus it propagates labels iteratively along the edges. Let $Y^{(k)} = [y_1^{(k)}, \dots, y_n^{(k)}]^\top \in \mathbb{R}^{n \times c}$ be the soft label matrix in iteration $k > 0$, in which the $i$-th row $y_i^{(k)\top}$ denotes the predicted label distribution for node $v_i$ in iteration $k$. When $k = 0$, the initial label matrix $Y^{(0)} = [y_1^{(0)}, \dots, y_n^{(0)}]^\top$ consists of one-hot label indicator vectors $y_i^{(0)}$ for $i = 1, \dots, m$ (i.e., labeled nodes) and zero vectors otherwise (i.e., unlabeled nodes). Let $D$ be the diagonal degree matrix for $A$, with entries $d_{ii} = \sum_j a_{ij}$. Then LPA (Zhu et al., 2005) in iteration $k$ is formulated as the following two steps:

$$Y^{(k+1)} = D^{-1} A Y^{(k)}, \qquad (1)$$

$$y_i^{(k+1)} = y_i^{(0)}, \quad \forall i \le m. \qquad (2)$$

In Eq. (1), all nodes propagate labels to their neighbors according to the normalized edge weights. Then in Eq. (2), the labels of all labeled nodes are reset to their initial values, because LPA wants to persist the labels of labeled nodes so that unlabeled nodes do not overpower them and the initial labels do not fade away.
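As a concrete reference, the two steps above can be written in a few lines of NumPy. This is a minimal sketch under our own naming; the fixed iteration count stands in for running LPA to convergence.

```python
import numpy as np

def label_propagation(A, Y0, labeled, num_iter=50):
    """Iterate Eqs. (1)-(2): propagate soft labels with the row-normalized
    adjacency D^{-1} A, resetting labeled rows to their one-hot vectors.

    A:       (n, n) adjacency matrix with self-loops
    Y0:      (n, c) initial labels; one-hot rows for labeled nodes, zeros else
    labeled: (n,) boolean mask marking the m labeled nodes
    """
    d = A.sum(axis=1, keepdims=True)   # degrees d_ii = sum_j a_ij
    Y = Y0.copy()
    for _ in range(num_iter):
        Y = (A @ Y) / d                # Eq. (1): Y <- D^{-1} A Y
        Y[labeled] = Y0[labeled]       # Eq. (2): reset labeled nodes
    return Y
```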

Graph Convolutional Neural Network. GCN is a multi-layer feedforward neural network that propagates and transforms node features across the graph. The layer-wise propagation rule of GCN is $X^{(k+1)} = \sigma\big(D^{-\frac{1}{2}} A D^{-\frac{1}{2}} X^{(k)} W^{(k)}\big)$, where $W^{(k)}$ is the trainable weight matrix in the $k$-th layer, $\sigma(\cdot)$ is an activation function such as ReLU, and $X^{(k)} = [x_1^{(k)}, \dots, x_n^{(k)}]^\top$ are the $k$-th layer node representations, with $X^{(0)} = X$. To align with the above LPA, we use $D^{-1} A$ as the normalized adjacency matrix instead of the symmetric one $D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ proposed by Kipf & Welling (2017). Therefore, the feature propagation scheme of GCN in layer $k$ is

$$X^{(k+1)} = \sigma\big(D^{-1} A X^{(k)} W^{(k)}\big). \qquad (3)$$

Notice the similarity between Eqs. (1) and (3). Next we shall study and uncover the relationship between the two equations.
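For comparison with the LPA sketch above, a single layer of Eq. (3) can be sketched as follows, assuming ReLU as the activation (our illustration; in practice $W^{(k)}$ is trained by backpropagation in an autodiff framework):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer with the row-normalized adjacency, Eq. (3):
    X^{(k+1)} = ReLU(D^{-1} A X^{(k)} W^{(k)})."""
    d = A.sum(axis=1, keepdims=True)           # degrees d_ii
    return np.maximum(0.0, ((A @ X) / d) @ W)  # propagate, transform, ReLU
```

Placing this next to `label_propagation` makes the structural similarity of Eqs. (1) and (3) explicit: both multiply by $D^{-1} A$; GCN additionally applies a learned linear map and a nonlinearity.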

2.2. Feature Smoothing and Label Smoothing

The intuition behind both LPA and GCN is smoothing (Zhu et al., 2003; Li et al., 2018). In LPA, the final label of a node is the weighted average of the labels of its neighbors:

$$y_i^{(\infty)} = \frac{1}{d_{ii}} \sum_{j \in N(i)} a_{ij}\, y_j^{(\infty)}. \qquad (4)$$

In GCN, the final node representation is also the weighted average of the representations of its neighbors, if we assume $\sigma$ is the identity function and the $W^{(\cdot)}$ are identity matrices:

$$x_i^{(\infty)} = \frac{1}{d_{ii}} \sum_{j \in N(i)} a_{ij}\, x_j^{(\infty)}. \qquad (5)$$

Next we show the relationship between feature smoothing and label smoothing.

Theorem 1 (Relationship between feature smoothing and label smoothing). Suppose that the latent ground-truth mapping $M: x \to y$ from node features to node labels is differentiable and satisfies the $L$-Lipschitz constraint, i.e., $|M(x_1) - M(x_2)| \le L \|x_1 - x_2\|_2$ for any $x_1$ and $x_2$ ($L$ is a constant). If the edge weights $\{a_{ij}\}$ approximately smooth $x_i$ over its immediate neighbors with error $\epsilon_i$, i.e.,

$$x_i = \frac{1}{d_{ii}} \sum_{j \in N(i)} a_{ij}\, x_j + \epsilon_i, \qquad (6)$$

then the edge weights $\{a_{ij}\}$ also approximately smooth $y_i$ over its immediate neighbors, with the following approximation error:

$$\Big| y_i - \frac{1}{d_{ii}} \sum_{j \in N(i)} a_{ij}\, y_j \Big| \le L \|\epsilon_i\|_2 + o\Big(\max_{j \in N(i)} \|x_j - x_i\|_2\Big), \qquad (7)$$

where $o(\cdot)$ denotes a higher-order infinitesimal.
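As a quick sanity check of Theorem 1 (our own illustration, not part of the paper), the script below uses a linear ground-truth map $M(x) = w^\top x$, which is Lipschitz with constant $L = \|w\|_2$ and makes the higher-order term vanish, so the bound in Eq. (7) must hold exactly up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 50, 8
X = rng.normal(size=(n, dim))                     # node features
A = rng.random((n, n)); A = (A + A.T) / 2 + np.eye(n)  # weights + self-loops
w = rng.normal(size=dim)
L = np.linalg.norm(w)                             # Lipschitz constant of M
y = X @ w                                         # y_i = M(x_i), linear map

d = A.sum(axis=1)
eps = X - (A @ X) / d[:, None]                    # feature error, Eq. (6)
label_err = np.abs(y - (A @ y) / d)               # left side of Eq. (7)
bound = L * np.linalg.norm(eps, axis=1)           # right side, o(.) term = 0
print(np.all(label_err <= bound + 1e-9))          # expected: True
```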

The proof of Theorem 1 is in Appendix A. Theorem 1 indicates that label smoothing is theoretically guaranteed by feature smoothing. Note that if we treat the edge weights $\{a_{ij}\}$ as learnable, then feature smoothing (i.e., $\epsilon_i \to 0$) can be achieved directly by keeping the node features $x_i$ fixed while setting $\{a_{ij}\}$ appropriately, without resorting to feature propagation in a multi-layer GCN. Therefore, a simple approach to exploit this theorem would be to learn $\{a_{ij}\}$ by reconstructing node feature $x_i$ from its neighbors, then use the learned $\{a_{ij}\}$ to reconstruct the node labels $y_i$ (Karasuyama & Mamitsuka, 2013).
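A sketch of this two-stage recipe, under our own assumptions (softmax-normalized weights restricted to existing edges, plain squared reconstruction error), might look as follows; it illustrates the idea rather than the method of Karasuyama & Mamitsuka (2013):

```python
import torch

def learn_edge_weights(adj_mask, X, num_steps=200, lr=0.1):
    """Learn normalized edge weights a_ij / d_ii on existing edges so that
    each x_i is reconstructed from the weighted average of its neighbors
    (driving the feature smoothing error eps_i toward zero)."""
    n = X.shape[0]
    logits = torch.zeros(n, n, requires_grad=True)  # one parameter per edge
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(num_steps):
        # softmax over each row's existing edges yields normalized weights
        weights = torch.softmax(logits.masked_fill(adj_mask == 0, -1e9), dim=1)
        loss = ((X - weights @ X) ** 2).sum()       # reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return weights.detach()
```

The returned matrix plays the role of $D^{-1} A$; multiplying it with the initial label matrix, as in Eq. (1), then reuses the learned weights to reconstruct labels.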

As shown in Theorem 1, the approximation error of the labels is bounded in terms of the feature smoothing error.