Diffusion Models Acceleration: A Quick Survey

Student Report (2025)
Author(s)

F. Nardi Dei da Filicaia Dotti (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Basile Lewandowski

Contributor(s)

Lydia Y. Chen – Mentor (TU Delft - Data-Intensive Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2025
Language
English
Graduation Date
07-11-2025
Awarding Institution
Delft University of Technology
Programme
Computer Science
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This survey explores state-of-the-art advancements in accelerating diffusion models, focusing on techniques that address their computational and memory inefficiencies. Diffusion models have achieved remarkable success in generative AI, surpassing prior paradigms such as GANs in applications including image synthesis, text-to-image generation, and video generation. However, their reliance on a large number of sequential sampling steps makes them significantly less efficient than other generative approaches. This survey categorises and analyses 11 recent works aimed at overcoming these challenges, covering quantization techniques, knowledge distillation, and distributed parallel sampling. We aim to convey the intuition, theory, and tradeoffs behind these techniques. Finally, this work offers a reference for researchers and practitioners seeking to build or apply fast diffusion model architectures, providing a clear overview of the benchmarking parameters used in each of these works.
