Paper: https://arxiv.org/abs/2310.20092
Abstract:
Diffusion models are a family of generative models that yield record-breaking performance in tasks such as image synthesis, video generation, and molecule design. Despite their capabilities, their efficiency, especially in the reverse denoising process, remains a challenge due to slow convergence rates and high computational costs. In this work, we introduce an approach that leverages continuous dynamical systems to design a novel denoising network for diffusion models that is more parameter-efficient, exhibits faster convergence, and demonstrates increased noise robustness. Experimenting with denoising probabilistic diffusion models, our framework operates with approximately a quarter of the parameters and 30% of the Floating Point Operations (FLOPs) compared to standard U-Nets in Denoising Diffusion Probabilistic Models (DDPMs). Furthermore, our model is up to 70% faster in inference than the baseline models when measured in equal conditions while converging to better quality solutions.
No, it isn’t. It’s perfectly comprehensible.
The description of the architecture isn’t incredibly clear, but it’s enough to get the idea. I’d have liked to see the details, but if they want to write it like this that’s fine.
Care to write a clear explanation of the method here?
They use neural ODEs as the denoiser, chaining several neural ODE blocks together, and they somehow feed a time embedding of the diffusion step into them.
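Here's a minimal sketch of how I read that, not the paper's released code: a chain of neural-ODE blocks replacing the U-Net, each conditioned on the DDPM timestep through a sinusoidal embedding. The block widths, embedding size, and the fixed-step Euler integrator are all my own illustrative assumptions.

```python
import math
import torch
import torch.nn as nn


def timestep_embedding(t, dim):
    """Standard sinusoidal embedding of the diffusion timestep t (shape [B])."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half)
    args = t.float()[:, None] * freqs[None, :]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)  # [B, dim]


class ODEBlock(nn.Module):
    """One neural-ODE block: integrates dh/ds = f(h, s; t_emb) over s in [0, 1]."""

    def __init__(self, channels, t_dim, steps=4):
        super().__init__()
        self.steps = steps  # fixed-step Euler for simplicity; an assumption, not the paper's solver
        self.f = nn.Sequential(
            nn.Conv2d(channels + t_dim, channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, h, t_emb):
        # Broadcast the time embedding over spatial dims and integrate with Euler steps.
        b, _, hh, ww = h.shape
        cond = t_emb[:, :, None, None].expand(b, t_emb.shape[1], hh, ww)
        ds = 1.0 / self.steps
        for _ in range(self.steps):
            h = h + ds * self.f(torch.cat([h, cond], dim=1))
        return h


class NeuralODEDenoiser(nn.Module):
    """A chain of ODE blocks predicting the noise eps from a noisy image x_t."""

    def __init__(self, in_ch=3, width=64, t_dim=32, n_blocks=3):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, width, 3, padding=1)
        self.blocks = nn.ModuleList([ODEBlock(width, t_dim) for _ in range(n_blocks)])
        self.head = nn.Conv2d(width, in_ch, 3, padding=1)
        self.t_dim = t_dim

    def forward(self, x_t, t):
        t_emb = timestep_embedding(t, self.t_dim)
        h = self.stem(x_t)
        for block in self.blocks:
            h = block(h, t_emb)
        return self.head(h)  # predicted noise


if __name__ == "__main__":
    model = NeuralODEDenoiser()
    x = torch.randn(2, 3, 32, 32)
    t = torch.randint(0, 1000, (2,))
    print(model(x, t).shape)  # torch.Size([2, 3, 32, 32])
```

The parameter savings they claim presumably come from the weight sharing across integration steps inside each block; the actual solver, number of blocks, and how the time embedding enters the dynamics would need the paper's details (or code) to pin down.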