DiffPool
Motivation
Current GNN methods are inherently flat and do not learn hierarchical representations of graphs - a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph.
Contribution
The authors propose DIFFPOOL, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DIFFPOOL learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer.
Idea
Given a graph $G$ with $n$ nodes, denote its adjacency matrix by $A \in \{0,1\}^{n \times n}$ and its initial node feature matrix by $X \in \mathbb{R}^{n \times d}$. We want to learn a mapping $f : \mathcal{G} \rightarrow \mathcal{Y}$ from a set of labeled graphs $\mathcal{D} = \{(G_1, y_1), (G_2, y_2), \ldots\}$, where $\mathcal{G}$ is the set of all possible graphs and $\mathcal{Y}$ is the set of all possible graph labels.
Starting with $(A^{(0)}, X^{(0)}) = (A, X)$, we repeatedly perform the following update rules for $l = 0, 1, \ldots, L$:

$$Z^{(l)} = \mathrm{GNN}_{l,\mathrm{embed}}\big(A^{(l)}, X^{(l)}\big)$$

$$S^{(l)} = \mathrm{softmax}\Big(\mathrm{GNN}_{l,\mathrm{pool}}\big(A^{(l)}, X^{(l)}\big)\Big)$$

$$X^{(l+1)} = {S^{(l)}}^{\top} Z^{(l)} \in \mathbb{R}^{n_{l+1} \times d}$$

$$A^{(l+1)} = {S^{(l)}}^{\top} A^{(l)} S^{(l)} \in \mathbb{R}^{n_{l+1} \times n_{l+1}}$$

(the softmax is applied row-wise)
where:
$S^{(l)} \in \mathbb{R}^{n_l \times n_{l+1}}$ is considered as a soft assignment matrix assigning nodes in layer $l$ to clusters in layer $l+1$. Note that the $n_l$'s for $l \geq 1$ are hyperparameters.
$\mathrm{GNN}_{l,\mathrm{embed}}$ and $\mathrm{GNN}_{l,\mathrm{pool}}$ are two separate GNNs.
$S^{(L)}$ is a vector of $1$'s, i.e., all nodes at the final layer are assigned to a single cluster.
$X^{(L+1)} \in \mathbb{R}^{1 \times d}$ is a final embedding vector corresponding to the entire graph and can be passed to a differentiable classifier.
The authors consider the last two equations as a layer: $\big(A^{(l+1)}, X^{(l+1)}\big) = \mathrm{DIFFPOOL}\big(A^{(l)}, Z^{(l)}\big)$.
Each round of performing the equations can be viewed as a round of graph coarsening, until the graph is coarsened into a single node at layer $L$. (The paper includes a nice visualization of successive rounds of coarsening.)
Elementwise, the last two equations above correspond to

$$X^{(l+1)}_{j,:} = \sum_{i=1}^{n_l} S^{(l)}_{ij} Z^{(l)}_{i,:}, \qquad A^{(l+1)}_{jk} = \sum_{i=1}^{n_l} \sum_{m=1}^{n_l} S^{(l)}_{ij} A^{(l)}_{im} S^{(l)}_{mk},$$

i.e., each cluster embedding is an assignment-weighted sum of node embeddings, and each entry of the coarsened adjacency matrix aggregates the edge weights between the member nodes of clusters $j$ and $k$.
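To make the update rules concrete, here is a minimal dense PyTorch sketch of a single DIFFPOOL layer (my own illustration, not the authors' reference code; the two component GNNs are stubbed as single message-passing layers in the toy usage):

```python
import torch
import torch.nn.functional as F

def diffpool(A, X, gnn_embed, gnn_pool):
    """One DIFFPOOL layer: pool n_l nodes into n_{l+1} clusters.

    A: (n_l, n_l) adjacency matrix of the current coarsened graph.
    X: (n_l, d) node feature matrix.
    gnn_embed, gnn_pool: callables mapping (A, X) -> node-level outputs.
    """
    Z = gnn_embed(A, X)                    # (n_l, d)          node embeddings
    S = F.softmax(gnn_pool(A, X), dim=-1)  # (n_l, n_{l+1})    soft assignments
    X_next = S.t() @ Z                     # (n_{l+1}, d)      cluster embeddings
    A_next = S.t() @ A @ S                 # (n_{l+1}, n_{l+1}) coarsened adjacency
    return A_next, X_next, S

# Toy usage: a single message-passing layer stands in for each component GNN.
n, d, k = 6, 8, 3
A = (torch.rand(n, n) < 0.3).float()
A = ((A + A.t()) > 0).float()              # symmetrize the random adjacency
X = torch.randn(n, d)
W_e, W_p = torch.randn(d, d), torch.randn(d, k)
gnn_embed = lambda A, X: torch.relu((A + torch.eye(len(A))) @ X @ W_e)
gnn_pool = lambda A, X: (A + torch.eye(len(A))) @ X @ W_p
A1, X1, S = diffpool(A, X, gnn_embed, gnn_pool)  # 6 nodes pooled into 3 clusters
```

Since every step is a matrix product or a softmax, gradients flow through the cluster assignments end-to-end, which is the whole point of the module.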
Permutation Invariance: The authors present a nice result that any deep GNN model based on DIFFPOOL is permutation invariant, as long as the component GNNs are permutation invariant.
Proposition 1. Let $P \in \{0,1\}^{n \times n}$ be any permutation matrix; then $\mathrm{DIFFPOOL}(A, Z) = \mathrm{DIFFPOOL}(PAP^{\top}, PZ)$ as long as $\mathrm{GNN}(A, X) = \mathrm{GNN}(PAP^{\top}, PX)$.
I actually did not understand why this claim is true as stated, and was not persuaded by the authors' argument.
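For what it's worth, here is a sketch (my own, not from the paper) of why the result holds if one reads the condition as permutation *equivariance* of the component GNNs, i.e., $\mathrm{GNN}(PAP^{\top}, PX) = P\,\mathrm{GNN}(A, X)$, which is arguably what was intended. Since permutation matrices are orthogonal ($P^{\top}P = I$), the permutations cancel inside the pooling equations:

$$\begin{aligned}
Z' &= \mathrm{GNN}_{\mathrm{embed}}(PAP^{\top}, PX) = PZ, \\
S' &= \mathrm{softmax}\big(\mathrm{GNN}_{\mathrm{pool}}(PAP^{\top}, PX)\big) = PS, \\
X^{(l+1)\prime} &= (PS)^{\top}(PZ) = S^{\top}P^{\top}PZ = S^{\top}Z = X^{(l+1)}, \\
A^{(l+1)\prime} &= (PS)^{\top}(PAP^{\top})(PS) = S^{\top}A S = A^{(l+1)},
\end{aligned}$$

where the row-wise softmax commutes with the row permutation $P$. So the pooled output is unchanged by any relabeling of the nodes.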
Auxiliary Link Prediction Objective and Entropy Regularization
In practice, it can be difficult to train the pooling GNN using only the gradient signal from the graph classification task. To alleviate this issue, the authors train the pooling GNN with an auxiliary link prediction objective, which encodes the intuition that nearby nodes should be pooled together. In particular, at each layer $l$, the authors minimize $L_{\mathrm{LP}} = \big\| A^{(l)} - S^{(l)} {S^{(l)}}^{\top} \big\|_F$, where $\|\cdot\|_F$ denotes the Frobenius norm. Note that $\big(S^{(l)} {S^{(l)}}^{\top}\big)_{ij}$ corresponds to the inner product between the cluster assignments for nodes $i$ and $j$. Intuitively, the larger $A^{(l)}_{ij}$ is, the stronger the connectivity between the two nodes, and the more similar their cluster assignments should be.
The authors also believe that the output cluster assignment for each node should generally be close to a one-hot vector, so that the membership for each cluster or subgraph is clearly defined. The authors therefore regularize the entropy of the cluster assignment by minimizing $L_{\mathrm{E}} = \frac{1}{n} \sum_{i=1}^{n} H(S_i)$, where $H$ denotes the entropy function and $S_i$ is the $i$-th row of $S^{(l)}$.
During training, $L_{\mathrm{LP}}$ and $L_{\mathrm{E}}$ from each layer are added to the classification loss. This makes training take longer to converge, but yields better performance and more interpretable cluster assignments.
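A short sketch of how these side objectives might be computed per layer (hedged: the variable names and the `eps` stabilizer are my own choices, not from the paper):

```python
import torch

def auxiliary_losses(A, S, eps=1e-8):
    """Side objectives for one DIFFPOOL layer.

    A: (n_l, n_l) adjacency matrix at this layer.
    S: (n_l, n_{l+1}) row-stochastic soft assignment matrix.
    """
    # L_LP = || A - S S^T ||_F : strongly connected nodes should have
    # similar (high inner product) cluster assignments.
    link_loss = torch.norm(A - S @ S.t(), p="fro")
    # L_E = (1/n) * sum_i H(S_i) : push each assignment row toward one-hot.
    entropy_loss = (-S * torch.log(S + eps)).sum(dim=-1).mean()
    return link_loss, entropy_loss
```

These would then be summed over all DIFFPOOL layers and added to the graph classification loss.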
Experiments
The authors evaluate the benefits of DIFFPOOL against a number of state-of-the-art graph classification approaches, with the goal of answering the following questions:
1. How does DIFFPOOL compare to other pooling methods proposed for GNNs (e.g., sort pooling or the SET2SET method)?
2. How does DIFFPOOL combined with GNNs compare to the state of the art for the graph classification task, including both GNNs and kernel-based methods?
3. Does DIFFPOOL compute meaningful and interpretable clusters on the input graphs?
Datasets
ENZYMES, PROTEINS, D&D, the social network dataset REDDIT-MULTI-12K, and the scientific collaboration dataset COLLAB.
10% of the graphs are sampled to use as a validation set, with the remaining graphs used to perform 10-fold cross-validation to evaluate model performance.
Model configurations
GNN model: "mean" variant of GRAPHSAGE, which gives better performance than GCN
A DIFFPOOL layer is applied after every two GRAPHSAGE layers. A total of 2 DIFFPOOL layers are used for the large datasets. The number of clusters is set as 25% of the number of nodes before applying DIFFPOOL.
For small datasets, one DIFFPOOL layer is enough; 3 layers of graph convolutions are performed after the DIFFPOOL layer. The number of clusters is set as 10% of the number of nodes before applying DIFFPOOL.
Batch Normalization is applied after every layer of GRAPHSAGE.
Adding an $\ell_2$ normalization to the node embeddings at each layer made the training more stable (see the sketch after this list).
All models are trained for 3,000 epochs, with early stopping applied when the validation loss stops decreasing.
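Putting the configuration together, one building block might look roughly like this (my own sketch of a "mean"-aggregator GRAPHSAGE layer followed by BatchNorm and $\ell_2$ normalization; the dense adjacency interface matches the DIFFPOOL sketch above and is an illustrative choice, not the authors' code):

```python
import torch
import torch.nn.functional as F

class SageMeanBlock(torch.nn.Module):
    """One GRAPHSAGE 'mean' layer, then BatchNorm, then l2 normalization."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = torch.nn.Linear(2 * d_in, d_out)  # acts on [self ; mean(neighbors)]
        self.bn = torch.nn.BatchNorm1d(d_out)

    def forward(self, A, X):
        deg = A.sum(dim=-1, keepdim=True).clamp(min=1.0)
        neigh = (A @ X) / deg                    # mean over neighbors
        H = self.lin(torch.cat([X, neigh], dim=-1))
        H = self.bn(F.relu(H))                   # BatchNorm after every layer
        return F.normalize(H, p=2, dim=-1)       # l2-normalize node embeddings
```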
Two model variants are considered
DIFFPOOL-DET, a DIFFPOOL model where assignment matrices are generated using a deterministic graph clustering algorithm
DIFFPOOL-NOLP, a variant where the link prediction side objective is turned off
Performance Statistics
The other GNN-based models used for performance comparison include:
GRAPHSAGE with global mean-pooling
Edge-conditioned filters in CNNs for graphs (ECC), which incorporates edge information into the GCN model and performs pooling using a graph coarsening algorithm
PATCHYSAN, which defines a receptive field for each node and, using a canonical node ordering, applies convolutions on linear sequences of node embeddings
SET2SET, which replaces global mean-pooling with the aggregation used in SET2SET; GRAPHSAGE is used as the base GNN model
SORTPOOL, which applies a GNN architecture and then performs a single layer of soft pooling followed by 1D convolution on sorted node embeddings
Differentiable Pooling on STRUCTURE2VEC
The authors also tested applying DIFFPOOL on Structure2Vec (S2V), a state-of-the-art graph representation learning algorithm that combines a latent variable model with GNNs and uses global mean pooling.
Analysis of Cluster Assignment in DIFFPOOL
Hierarchical cluster structure. The paper visualizes the learned cluster assignments at different layers. Node cluster membership is determined by taking the argmax of its cluster assignment probabilities (a one-line operation; see the snippet after this list). The authors observe that even when learning cluster assignments based solely on the graph classification objective, DIFFPOOL can still capture the hierarchical community structure. They also observe significant improvement in membership assignment quality with the link prediction auxiliary objective.
Dense vs. sparse subgraph structure. The authors observe that DIFFPOOL tends to collapse densely-connected subgraphs into clusters.
Assignment for nodes with similar representations. Nodes with similar features and local structure tend to receive similar cluster assignments, which is expected given the update rule: the assignment matrix is computed directly from node representations. The authors also observe that the auxiliary link prediction objective helps discourage nodes that are far apart from being pooled together.
Sensitivity to the pre-defined maximum number of clusters. The assignment varies with the depth of the network and the maximum number of clusters. A larger maximum number of clusters helps in modeling more complex structure, but also introduces noise and is less efficient. When there are too many clusters, some may never be effectively used: all nodes can end up with low assignment values for those clusters in the assignment matrix.
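As a small addendum to the hierarchical-structure point above: extracting hard cluster memberships from the soft assignment matrix is a one-liner, assuming the `S` from the earlier sketches:

```python
# hard cluster membership per node, from the (n_l, n_{l+1}) soft assignment S
membership = S.argmax(dim=-1)
```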