On translation invariance in CNNs

Convolutional layers can exploit absolute spatial location

Conference Paper (2020)
Author(s)

Osman Semih Kayhan (TU Delft - Pattern Recognition and Bioinformatics)

J.C. van Gemert (TU Delft - Pattern Recognition and Bioinformatics)

Research Group
Pattern Recognition and Bioinformatics
DOI
https://doi.org/10.1109/CVPR42600.2020.01428
Publication Year
2020
Language
English
Pages (from-to)
14262-14273
ISBN (print)
978-1-7281-7169-2
ISBN (electronic)
978-1-7281-7168-5

Abstract

In this paper we challenge the common assumption that convolutional layers in modern CNNs are translation invariant. We show that CNNs can and will exploit absolute spatial location by learning filters that respond exclusively to particular absolute positions, exploiting image boundary effects. Because filters in modern CNNs have huge receptive fields, these boundary effects operate even far from the image boundary, allowing the network to exploit absolute spatial location all over the image. We give a simple solution that removes spatial location encoding, which improves translation invariance and thus yields a stronger visual inductive bias that particularly benefits small datasets. We broadly demonstrate these benefits on several architectures and various applications such as image classification, patch matching, and two video classification datasets.
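The boundary effect described in the abstract can be illustrated with a minimal sketch (not the paper's implementation, and independent of any deep learning framework): a zero-padded "same" convolution applied to a constant image. A truly translation-invariant layer would respond identically everywhere on such an input, but with zero padding the outputs near the border differ from those in the interior, so a learned filter can read off absolute position. The function name `conv2d_same` and the toy 8x8 image are illustrative assumptions.

```python
import numpy as np

def conv2d_same(image, kernel):
    """2-D cross-correlation with 'same' zero padding (the common CNN default)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Constant input: any translation-invariant layer must respond uniformly.
image = np.ones((8, 8))
kernel = np.ones((3, 3))
response = conv2d_same(image, kernel)

print(response[0, 0])  # corner: only 4 of the 9 taps see image pixels -> 4.0
print(response[4, 4])  # interior: all 9 taps see image pixels -> 9.0
```

The non-uniform response (4.0 at the corner vs 9.0 in the interior) is exactly the location signal the paper shows networks learn to exploit; with deep stacks of layers, the large effective receptive field propagates this signal far into the image interior.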

Metadata only record. There are no files for this record.