H. Jamali-Rad
10 records found
Aligning diffusion model outputs with downstream objectives is essential for improving task-specific performance. Broadly, inference-time training-free approaches for aligning diffusion models can be categorized into two main strategies: sampling-based methods, which explore mul...
Text-to-image (T2I) diffusion models have achieved remarkable image quality but still struggle to produce images that align with the compositional information from the input text prompt, especially when it comes to spatial cues. We attribute this limitation to two key factors: th...
Proteins are fundamental biological macromolecules essential for cellular structure, enzymatic catalysis, and immune defense, making the generation of novel proteins crucial for advancements in medicine, biotechnology, and material sciences. This study explores protein design usi...
Masked Autoencoders (MAEs) represent a significant shift in self-supervised learning (SSL) due to their independence from augmentation techniques for generating positive (and/or negative) pairs as in contrastive frameworks. Their masking and reconstruction strategy also aligns we...
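The masking-and-reconstruction strategy this record refers to can be sketched minimally. This is an illustrative toy, not the paper's method: the patch shapes, the 75% mask ratio, and the zero-prediction placeholder standing in for an encoder/decoder are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mae_step(patches, mask_ratio=0.75):
    """One MAE-style step: hide most patches, reconstruct only the hidden ones.

    A real MAE encodes the visible patches and decodes predictions for the
    masked ones; here the prediction is a zero placeholder to keep the
    sketch self-contained.
    """
    n = patches.shape[0]
    n_masked = int(n * mask_ratio)
    perm = rng.permutation(n)
    masked_idx, visible_idx = perm[:n_masked], perm[n_masked:]
    recon = np.zeros_like(patches[masked_idx])  # placeholder for decoder output
    # Key MAE detail: the loss is computed only on the masked patches.
    loss = np.mean((recon - patches[masked_idx]) ** 2)
    return loss, masked_idx, visible_idx

patches = rng.normal(size=(16, 8))  # 16 patches, 8-dim each (toy values)
loss, m_idx, v_idx = mae_step(patches)
```

Note that no augmented positive/negative pairs appear anywhere, which is the independence from contrastive-style augmentations the abstract highlights.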
The Deep Neural Network (DNN) has become a widely popular machine learning architecture thanks to its ability to learn complex behaviors from data. Standard learning strategies for DNNs, however, rely on the availability of large, labeled datasets. Self-Supervised Learning (SSL) is...
BECLR
Batch Enhanced Contrastive Unsupervised Few-Shot Learning
There exists a fundamental gap between human and artificial intelligence. Deep learning models are exceedingly data-hungry for learning even the simplest of tasks, whereas humans can easily adapt to new tasks with just a handful of samples. Unsupervised few-shot learning (U-FSL)...
Current methods in Federated and Decentralized learning presume that all clients share the same model architecture, i.e., model homogeneity. In practice, however, this assumption may not always hold due to hardware differences. While prior research has addressed model heteroge...
Self-Supervised Few-Shot Learning
Prototypical Contrastive Learning with Graphs
A primary trait of humans is the ability to learn rich representations and relationships between entities from just a handful of examples without much guidance. Unsupervised few-shot learning is an undertaking aimed at reducing this fundamental gap between smart human adaptabilit...
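The "prototypical" idea named in this record's title can be sketched in a few lines: each class is summarized by a prototype (the mean of its support embeddings), and a query is assigned to the nearest prototype. This is a generic prototypical-classification sketch under assumed toy embeddings, not the paper's graph-based contrastive method.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Prototype of each class = mean of that class's support embeddings."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query, protos):
    """Assign the query to the nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(protos - query, axis=1)
    return int(np.argmin(dists))

# Toy 2-way, 2-shot episode (embeddings are illustrative assumptions)
emb = np.array([[0., 0.], [0., 2.], [4., 4.], [4., 6.]])
labels = np.array([0, 0, 1, 1])
classes, protos = class_prototypes(emb, labels)
pred = classify(np.array([0.5, 1.0]), protos)  # → 0 (nearer to class-0 prototype)
```

The appeal in the few-shot setting is that prototypes need no training beyond the embedding network itself: a new class is defined by averaging a handful of samples.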
TRIDENT
Transductive Variational Inference of Decoupled Latent Variables for Few-Shot Classification
The versatility to learn from a handful of samples is the hallmark of human intelligence. Few-shot learning is an endeavour to transcend this capability down to machines. Inspired by the promise and power of probabilistic deep learning, we propose a novel variational inference ne...
Binary Neural Networks (BNNs) are receiving an upsurge of attention for bringing power-hungry deep learning towards edge devices. The traditional wisdom in this space is to employ sign(.) for binarizing feature maps. We argue and illustrate that sign(.) is a uniqueness bottleneck,...
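The "uniqueness bottleneck" claim about sign(.) can be made concrete with a tiny sketch: binarizing with sign(.) maps every positive activation to +1 and every non-positive one to -1, so feature maps with very different magnitudes collapse to the same binary code. The example values below are illustrative assumptions, not from the paper.

```python
import numpy as np

def sign_binarize(x):
    """Standard BNN binarization: map each activation to +1 or -1."""
    return np.where(x >= 0, 1.0, -1.0)

# Two clearly different feature maps...
a = np.array([0.1, 2.7, -0.3, -5.0])
b = np.array([1.3, 0.2, -4.1, -0.1])

# ...become indistinguishable after sign(.) binarization:
# both map to [ 1.  1. -1. -1.]
same_code = (sign_binarize(a) == sign_binarize(b)).all()
```

All magnitude information is discarded, which is one way to see why sign(.) limits how many distinct inputs a binarized layer can tell apart.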