An Experimental Assessment of the Stability of Graph Contrastive Learning

Abstract

The Deep Neural Network (DNN) has become a widely popular machine learning architecture thanks to its ability to learn complex behaviors from data. Standard learning strategies for DNNs, however, rely on the availability of large, labeled datasets. Self-Supervised Learning (SSL) is a style of learning that allows models to also use unlabeled data for training, which is typically much more abundant.
SSL is being applied to many different data domains, such as images and natural language. One such domain is graph data. A graph is a data structure describing a network of nodes connected by edges. Graphs are a natural way of representing many forms of data, such as molecules, social networks, and 3D meshes.
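For concreteness, a small illustrative sketch (not taken from the thesis) of how such a graph is typically encoded as model input, here for a hypothetical three-atom, water-like molecule described by a node-feature matrix and an adjacency matrix:

```python
# Illustrative only: a three-node graph (one oxygen bonded to two hydrogens)
# represented by a node-feature matrix X and a symmetric adjacency matrix A.
import numpy as np

X = np.array([[1, 0],    # node 0: oxygen   (one-hot feature, hypothetical encoding)
              [0, 1],    # node 1: hydrogen
              [0, 1]])   # node 2: hydrogen

A = np.array([[0, 1, 1],  # A[i, j] = 1 if nodes i and j share an edge
              [1, 0, 0],
              [1, 0, 0]])
```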
The style of SSL that has found the most success on graphs is Contrastive Learning (CL). In CL, an encoder is trained to produce semantically rich representations from unlabeled input data by smartly separating task-relevant information in the input from task-irrelevant information. The encoder backbone most commonly used for Graph Contrastive Learning (GCL) is the Graph Convolutional Neural Network (GCNN).
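As a rough illustration of the contrastive principle, many GCL methods optimize an InfoNCE/NT-Xent-style objective that pulls together the embeddings of two augmented views of the same graph or node while pushing apart the embeddings of different ones. The following is a simplified sketch of such a loss, not the exact objective studied in the thesis:

```python
# Simplified contrastive (InfoNCE/NT-Xent-style) loss over two augmented views.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z1, z2: [N, d] embeddings of two augmented views of the same graphs/nodes.
    Row i of z1 and row i of z2 form a positive pair; all other pairs act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                           # cosine similarities / temperature
    targets = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)
```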
While GCNNs are the state of the art on many graph data tasks, they suffer from underfitting when made too deep. This is especially problematic for GCL, as it prevents encoder complexity from scaling with the large availability of unlabeled data.
In this thesis, we investigate this underfitting behavior through the lens of GCNN stability. Stability refers to a model's ability to keep producing consistent outputs even when its inputs are perturbed slightly. Theoretical work has shown that stability guarantees for GCNNs weaken when their complexity is increased. We confirm experimentally that, in many cases, GCNNs indeed grow less stable when made more complex. This is a relevant finding, given that learning stable representations is a prerequisite for CL. Additionally, we show in our experiments that, even when trained using CL, stability discrepancies between different GCNN architectures do not disappear. This, in turn, suggests that GCNN architectures with poorer stability may also produce poorer representations. We confirm experimentally that, on at least one dataset, poor stability resulting from architectural complexity does indeed correlate with a degradation in representation quality.
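One simple way such stability can be probed empirically is to perturb a small fraction of edges and measure how far the encoder's output moves. The sketch below uses placeholder names and a placeholder perturbation model; it is not the thesis' exact protocol:

```python
# Illustrative stability probe: randomly drop a small fraction of edges and
# measure the relative shift of the embeddings. Smaller values = more stable.
import torch

def embedding_shift(encoder, x, adj, drop_prob: float = 0.05) -> float:
    """encoder: callable mapping (node features, dense adjacency) to embeddings.
    x: [N, F] node features; adj: [N, N] symmetric float adjacency matrix."""
    keep = (torch.rand_like(adj) > drop_prob).float().triu(diagonal=1)
    keep = keep + keep.t()                    # drop each undirected edge symmetrically
    z_ref = encoder(x, adj)                   # embeddings of the clean graph
    z_pert = encoder(x, adj * keep)           # embeddings of the perturbed graph
    return (torch.norm(z_pert - z_ref) / torch.norm(z_ref)).item()
```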
With this result we provide an additional explanation as to why deeper GCNNs are often found to perform worse in GCL settings. These insights can, in turn, motivate the design of model architectures for GCL that do not suffer from this trade-off between complexity and representation quality.

Files

MSc_Thesis_Siert.pdf
(pdf | 15.7 Mb)
Unknown license