Learned equivariance in Convolutional Neural Networks


Abstract

Aside from developing methods to embed equivariant priors into architectures, one can also study how networks learn equivariant properties. In this work, we study the influence of different factors on learned equivariance. We propose a method to quantify equivariance and argue why correlation between intermediate representations may be a better choice than other commonly used metrics. We show that imposing equivariance or invariance in the objective function does not lead to more equivariant features in the early parts of the network. We also study how different data augmentations influence translation equivariance. Furthermore, we show that models with lower capacity learn more translation-equivariant features. Lastly, we quantify translation and rotation equivariance in several state-of-the-art image classification models and analyse the correlation between the amount of equivariance and accuracy.
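
The abstract describes the metric only at a high level. As a minimal sketch of the general idea, assuming a PyTorch setting, one can correlate the features of a shifted input with the shifted features of the original input; a score near 1 indicates translation equivariance. The function name, the circular shift, and the single-layer example below are illustrative choices, not the paper's exact protocol:

```python
import torch
import torch.nn as nn

def translation_equivariance_score(model: nn.Module, x: torch.Tensor, shift: int = 4) -> float:
    """Pearson correlation between features of a shifted input and
    shifted features of the original input (illustrative, not the
    paper's exact metric)."""
    model.eval()
    with torch.no_grad():
        f_x = model(x)                                           # features of original input
        f_shifted = model(torch.roll(x, shifts=shift, dims=-1))  # features of shifted input
        shifted_f = torch.roll(f_x, shifts=shift, dims=-1)       # shifted features
    a = f_shifted.flatten() - f_shifted.mean()                   # center both feature vectors
    b = shifted_f.flatten() - shifted_f.mean()
    return (a @ b / (a.norm() * b.norm())).item()                # Pearson correlation

# Example: a single conv layer is translation equivariant up to border effects,
# so the score should be close to 1.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)
print(translation_equivariance_score(conv, x))
```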
