There exists a fundamental gap between human and artificial intelligence. Deep learning models are exceedingly data-hungry when learning even the simplest of tasks, whereas humans can easily adapt to new tasks with just a handful of samples. Unsupervised few-shot learning (U-FSL) aspires to bridge this gap without relying on costly annotations. Inspired by the efficiency of contrastive representation learning, we propose a novel batch-enhanced contrastive U-FSL pretraining methodology (coined BECLR) to infuse instance- and class-level insights within a contrastive framework. To enable the sampling of meaningful positives, we introduce an innovative dynamic clustered memory module (DyCE), which maintains highly separable latent-space partitions through iterative equipartitioned updates. We also propose an effective optimal transport (OT)-based feature alignment strategy (OpTA) to address sample bias in the U-FSL inference stage and further boost the end-to-end performance of BECLR in low-shot settings. Our extensive experimental evaluation corroborates the efficacy of our design choices in BECLR, which sets a new state-of-the-art on the most widely adopted U-FSL benchmarks, miniImageNet and tieredImageNet (offering up to 14% and 12% improvements, respectively), as well as on challenging cross-domain scenarios.
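To give a flavor of the OT-based alignment idea mentioned above, the following is a minimal, hypothetical sketch (not the authors' implementation): entropy-regularized Sinkhorn iterations compute a transport plan between support prototypes and query features, and each prototype is then mapped to a convex combination of query features, illustrating how sample bias between the two sets could be reduced. All function names and parameter values here are illustrative assumptions.

```python
# Illustrative sketch of OT-based feature alignment (names and values are assumptions,
# not the BECLR implementation).
import numpy as np

def sinkhorn(cost, eps=0.05, n_iters=100):
    """Entropy-regularized OT plan between uniform marginals (Sinkhorn-Knopp)."""
    n, m = cost.shape
    K = np.exp(-cost / eps)                 # Gibbs kernel
    r, c = np.ones(n) / n, np.ones(m) / m   # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = r / (K @ v)
        v = c / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)      # transport plan

def align_prototypes(prototypes, query_feats, eps=0.05):
    """Shift class prototypes toward the query distribution via a barycentric map."""
    # pairwise squared Euclidean cost between prototypes and query features
    cost = ((prototypes[:, None, :] - query_feats[None, :, :]) ** 2).sum(-1)
    plan = sinkhorn(cost, eps)
    # each prototype becomes a convex combination of query features
    return (plan / plan.sum(axis=1, keepdims=True)) @ query_feats

# toy usage: a 5-way task with 64-d features and 75 query samples
protos = np.random.randn(5, 64)
queries = np.random.randn(75, 64)
aligned = align_prototypes(protos, queries)
print(aligned.shape)  # (5, 64)
```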