Spectral Networks and Deep Locally Connected Networks on Graphs

Motivation

Generalize ConvNets to graphs.

Spatial Construction

Consider an undirected weighted graph denoted by $G=(V, W)$, where $V=\{1,\cdots, |V|\}$ and $W\in\mathbb{R}^{|V|\times|V|}$ is a symmetric nonnegative matrix.

Locality via W

The weights in a graph determine a notion of locality analogous to the one in images, audio, and text. We can restrict attention to sparse "filters" with receptive fields given by neighborhoods, thus reducing the number of parameters in a filter to $O(S\cdot n)$, where $S$ is the average neighborhood size and $n$ is the input feature size.

Multiresolution Analysis in Graphs

The authors mentioned that "[f]inding multiscale clusterings that are provably guaranteed to behave well w.r.t. Laplacian on the graph is still an open area of research" (the publication was in 2014).

Deep Locally Connected Networks

The spatial construction starts with a multiscale clustering of the graph. We consider $K$ scales. We set $V_0=V$ and, for each $k=1,\cdots, K$, we define $V_k$, a partition of $V_{k-1}$ into $d_k$ clusters, and a collection of neighborhoods around each element of $V_{k-1}$:

\begin{equation} \mathcal{N}_{k}=\{\mathcal{N}_{k,i};\ i=1,\cdots, d_{k-1}\}. \end{equation}

With these in hand, we can now define the $k$-th layer of the network as a mapping $\mathbb{R}^{d_{k-1}\times f_{k-1}}\rightarrow \mathbb{R}^{d_k\times f_k}$, $x_{k}\mapsto x_{k+1}$, defined as

\begin{equation} x_{k+1, j}=L_{k}h\left(\sum_{i=1}^{f_{k-1}}F_{k,i,j}x_{k, i}\right), \end{equation}

where:

  • $x_{k+1,j}$ is the $j$-th column of $x_{k+1}$ and $x_{k, i}$ is the $i$-th column of $x_k$

  • $F_{k, i, j}$ is a $d_{k-1}\times d_{k-1}$ sparse matrix with nonzero entries in the locations given by $\mathcal{N}_{k}$

  • $L_k$ outputs the result of a pooling operation over each cluster in $V_k$.
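As a concrete sketch, the layer above can be written in a few lines of numpy. This is an illustration, not the paper's code: the dense filter tensor masked by the neighborhood support stands in for a true sparse implementation, and average pooling is one possible choice for $L_k$.

```python
import numpy as np

def spatial_layer(x, F, support, clusters, h=np.tanh):
    """x: (d_prev, f_in) node features at scale k-1;
    F: (f_in, f_out, d_prev, d_prev) filter tensor (sparse in practice);
    support: boolean (d_prev, d_prev) mask encoding the neighborhoods N_k;
    clusters: list of index arrays giving the partition V_k."""
    F_local = F * support                      # zero out entries outside N_k
    # y[a, j] = sum_i (F_{k,i,j} x_{k,i})[a], then the nonlinearity h
    y = h(np.einsum('ijab,bi->aj', F_local, x))
    # L_k: pool (here: average) over each cluster -> (d_k, f_out)
    return np.stack([y[c].mean(axis=0) for c in clusters])
```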

This construction is illustrated below:

Starting with $W_0=W$, for all $k\leq K$, $V_k$ is constructed by merging all nodes whose pairwise distance (determined by the corresponding entry in $W_{k-1}$) is less than $\varepsilon$. $W_k$ and $\mathcal{N}_{k}$ are then obtained with

\begin{align} A_{k}(i, j)&=\sum_{s\in V_k(i)}\sum_{t\in V_{k}(j)}W_{k-1}(s, t),\\ W_{k}&=\text{rownormalize}(A_k),\\ \mathcal{N}_{k}&=\text{supp}(W_k), \end{align}

where $V_{k}(i)$ is the $i$-th node in $V_k$ and $\text{supp}(W_k)$ is the support of $W_k$, i.e. $\{(i,j):W_k(i,j)>0\}$.
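A toy numpy version of one coarsening step might look as follows. This is a sketch under our reading of the text: entries of $W_{k-1}$ are treated as pairwise distances, merging uses a simple union-find, and every cluster is assumed to have at least one neighbor (the paper leaves the clustering algorithm open).

```python
import numpy as np

def coarsen_step(W_prev, eps):
    """Merge nodes at distance < eps, then aggregate and row-normalize."""
    d = W_prev.shape[0]
    parent = list(range(d))
    def find(u):                                  # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for s in range(d):
        for t in range(s + 1, d):
            if 0 < W_prev[s, t] < eps:            # "close" nodes are merged
                parent[find(s)] = find(t)
    roots = sorted({find(u) for u in range(d)})
    c = np.array([roots.index(find(u)) for u in range(d)])  # cluster labels
    A = np.zeros((len(roots), len(roots)))        # A_k(i,j): sum over merged pairs
    np.add.at(A, (c[:, None], c[None, :]), W_prev)
    W = A / A.sum(axis=1, keepdims=True)          # W_k = rownormalize(A_k)
    return W, W > 0, c                            # N_k = supp(W_k)
```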

Spectral Construction

The global structure of the graph can be exploited through the spectrum of its graph Laplacian to generalize the convolution operator.

Harmonic Analysis on Weighted Graphs

The frequency and smoothness relative to $W$ are interrelated through the combinatorial Laplacian $L=D-W$ or the normalized graph Laplacian $\mathcal{L}=I-D^{-\frac{1}{2}}WD^{-\frac{1}{2}}$. For simplicity, the authors simply use the combinatorial Laplacian.

If $x$ is an $m$-dimensional vector, a natural definition of the smoothness functional $||\nabla x||_{W}^{2}$ at a node $i$ is

\begin{align} ||\nabla x||_{W}^{2}(i)&=\sum_{j}W_{ij}\left(x(i)-x(j)\right)^2,\\ ||\nabla x||_{W}^{2}&=\sum_{i}\sum_{j}W_{ij}\left(x(i)-x(j)\right)^2=2x^{T}Lx. \end{align}

This definition is also known as the Dirichlet energy. With this definition, the smoothest vector is a constant:

\begin{equation} \phi_0=\text{argmin}_{x\in\mathbb{R}^{m},\, ||x||=1} ||\nabla x||_{W}^{2}=\frac{1}{\sqrt{m}}\textbf{1}_{m}. \end{equation}

Note here $\phi_0$ is also the eigenvector of $L$ corresponding to eigenvalue $0$.
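These identities are easy to verify numerically; here is a quick sanity check with numpy on a random weighted graph (our own check, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 6
W = rng.random((m, m)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W                 # combinatorial Laplacian L = D - W
x = rng.standard_normal(m)
# Dirichlet energy equals the quadratic form 2 x^T L x
energy = sum(W[i, j] * (x[i] - x[j]) ** 2 for i in range(m) for j in range(m))
assert np.isclose(energy, 2 * x @ L @ x)
# the normalized constant vector attains energy 0 (eigenvalue 0 of L)
phi0 = np.ones(m) / np.sqrt(m)
assert np.isclose(phi0 @ L @ phi0, 0)
```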

Each successive

\begin{equation} \phi_i = \text{argmin}_{x\in\mathbb{R}^{m},\, ||x||=1,\, x\perp\{\phi_0,\cdots,\phi_{i-1}\}}||\nabla x||_{W}^{2} \end{equation}

is an eigenvector of $L$, and the eigenvalues $\lambda_i$ allow the smoothness of a vector $x$ to be read off from the coefficients of $x$ in the basis $[\phi_0,\cdots, \phi_{m-1}]$, just as the Fourier coefficients do for a signal defined on a grid. The filters are therefore multipliers on the eigenvalues of $L$, which reduces the number of parameters of a filter from $m^2$ to $m$.

Two related results are:

1. Functions that are smooth relative to the grid metric have coefficients with quick decay in the basis of eigenvectors of $L$.
2. The eigenvectors of the subsampled Laplacian are the low-frequency eigenvectors of $L$.

Extending Convolutions via the Laplacian Spectrum

Theoretical Formulation

Given two functions $f,g:\Omega\rightarrow\mathbb{R}$, with $\Omega$ a Euclidean domain, the convolution of $f$ and $g$ is defined to be

\begin{equation} (f\star g)(t)=\int_{\Omega}f(s)g(t-s)ds, \end{equation}

where $\int_{\Omega}|f(s)g(t-s)|ds<\infty$.

A famous result related to convolution is the convolution theorem: $\widehat{f\star g}=\hat{f}\cdot\hat{g}$, where $\hat{\ }$ denotes the Fourier transform of a function and $\cdot$ denotes elementwise multiplication.

The Fourier transform of $f\in L^{1}(\mathbb{R})$ is defined by

\begin{equation} \hat{f}(x)=\int_{\mathbb{R}}f(y)e^{-ixy}dy. \end{equation}

The description above is for the Euclidean case. One way to generalize the notion of convolution to the non-Euclidean case is to take the convolution theorem as the definition.

Since $\Delta$ admits, on a compact domain, an eigendecomposition with a discrete set of orthonormal eigenfunctions $\phi_0,\phi_1,\cdots$, and these are the smoothest functions in the sense of the Dirichlet energy, they can be interpreted as a generalization of the standard Fourier basis.

We can then define the convolution of ff and gg on a graph as

\begin{equation} (f\star g)(x)=\sum_{i\geq 0} \langle f, \phi_i\rangle_{L^{2}(\mathcal{X})}\langle g, \phi_i\rangle_{L^{2}(\mathcal{X})}\phi_i(x), \end{equation}

where $\mathcal{X}$ is a graph.

For two vectors $\textbf{f}=(f_1,\cdots, f_n)^{T}$ and $\textbf{g}=(g_1,\cdots, g_n)^{T}$, the convolution of $\textbf{f}$ and $\textbf{g}$ can be calculated by

\begin{align} f \star g&= \begin{bmatrix} g_{n} & g_{n-1} & \cdots & g_{1}\\ g_{1} & g_{n} & \cdots & g_{n-1}\\ \vdots & \vdots & \ddots & \vdots\\ g_{n-1} & g_{n-2} & \cdots & g_{n} \end{bmatrix} \begin{bmatrix} f_1\\ \vdots\\ f_n \end{bmatrix}\\ &=\Phi\left((\Phi^{T}g)\circ(\Phi^{T}f)\right)\\ &=\Phi\,\text{diag}(\hat{g}_1,\cdots, \hat{g}_n)\Phi^{T}f, \end{align}

where $\Phi=(\phi_1,\cdots, \phi_n)$. Note that this formulation depends on the choice of $\Phi$.
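The chain of equalities above can be checked numerically. On a cycle graph the (complex) DFT vectors play the role of $\Phi$, so numpy's FFT stands in for the projection onto $\Phi$. This is a sanity check under one common indexing convention, not code from the paper:

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
f, g = rng.standard_normal(n), rng.standard_normal(n)
# circular convolution via the convolution theorem: invert the product of spectra
conv_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
# the same convolution as a circulant matrix-vector product
C = np.array([[g[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(C @ f, conv_fft)
```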

Practice

Let $\Phi$ be the eigenvectors of $L$, ordered by eigenvalue. A convolutional layer transforming node features, $\mathbb{R}^{|\Omega|\times f_{k-1}}\rightarrow\mathbb{R}^{|\Omega|\times f_{k}}$, $x_{k}\mapsto x_{k+1}$, can be defined as

\begin{equation} x_{k+1, j}=h\left(\sum_{i=1}^{f_{k-1}}\Phi F_{k,i,j}\Phi^{T}x_{k, i}\right)=h\left(\Phi \sum_{i=1}^{f_{k-1}} F_{k,i,j}\Phi^{T}x_{k, i}\right),\quad j=1,\cdots, f_k, \end{equation}

where $F_{k,i,j}$ is a diagonal matrix and $h$ is a real-valued nonlinearity. Note that, by the theory above, this update rule corresponds to $f_{k}\times f_{k-1}$ filters and $f_{k}\times f_{k-1}\times |\Omega|$ parameters.

Often only the first $d$ eigenvectors of the Laplacian are useful, as they carry the smooth geometry of the graph. We can treat $d$ as a hyperparameter and replace $\Phi$ in the equation above with $\Phi_d$, which keeps only the first $d$ columns of $\Phi$.
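A numpy sketch of this layer (names and the usage example are ours; `numpy.linalg.eigh` already returns eigenvectors in ascending eigenvalue order, so the first $d$ columns are the smooth ones):

```python
import numpy as np

def spectral_layer(x, Phi_d, F, h=np.tanh):
    """x: (n, f_in) node features; Phi_d: (n, d) first d eigenvectors of L;
    F: (f_in, f_out, d) diagonal filter entries. Returns (n, f_out)."""
    x_hat = Phi_d.T @ x                         # truncated graph Fourier transform
    y_hat = np.einsum('di,ijd->dj', x_hat, F)   # multiply each frequency by a filter
    return h(Phi_d @ y_hat)                     # back to the node domain

# usage on a small random graph
rng = np.random.default_rng(0)
n, d = 6, 3
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W
_, Phi = np.linalg.eigh(L)                      # eigenvectors ordered by eigenvalue
out = spectral_layer(rng.standard_normal((n, 2)), Phi[:, :d],
                     rng.standard_normal((2, 4, d)))
```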

This construction can suffer when individual high-frequency eigenvectors are not meaningful but a cohort of high-frequency eigenvectors carries meaningful information. It is also not obvious how to perform either the forward pass or the backward pass efficiently while applying the nonlinearity on the spatial side.

The construction above requires $O(|\Omega|)$ parameters. One possibility to use $O(1)$ parameters instead is to employ

\begin{equation} \text{diag}(F_{k,i,j})=\mathcal{K}\alpha_{k,i,j}, \end{equation}

where $\mathcal{K}$ is a fixed $d\times q_k$ cubic spline kernel and $\alpha_{k,i,j}$ are the spline coefficients. The theory for this part seems to be ambiguous.
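Since the paper does not pin this down, one plausible reading (our assumption) is a fixed spline design matrix: its columns form a cubic spline basis (here a truncated-power basis) evaluated at the $d$ frequencies, so $q_k$ coefficients produce a smooth length-$d$ diagonal filter regardless of $d$.

```python
import numpy as np

def cubic_spline_kernel(d, q):
    """Fixed (d, q) design matrix: cubic polynomial terms plus q-4 truncated cubics."""
    t = np.linspace(0.0, 1.0, d)                 # frequency index rescaled to [0, 1]
    knots = np.linspace(0.0, 1.0, q - 2)[1:-1]   # q - 4 interior knots
    cols = [np.ones(d), t, t**2, t**3]
    cols += [np.clip(t - k, 0.0, None) ** 3 for k in knots]
    return np.stack(cols, axis=1)

K = cubic_spline_kernel(d=16, q=6)
alpha = np.array([0.0, 1.0, -1.0, 0.5, 2.0, -2.0])  # q_k = 6 learnable parameters
diag_F = K @ alpha                               # smooth length-16 diagonal filter
```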
