Paper: https://arxiv.org/abs/2310.20092

Abstract:

Diffusion models are a family of generative models that yield record-breaking performance in tasks such as image synthesis, video generation, and molecule design. Despite their capabilities, their efficiency, especially in the reverse denoising process, remains a challenge due to slow convergence rates and high computational costs. In this work, we introduce an approach that leverages continuous dynamical systems to design a novel denoising network for diffusion models that is more parameter-efficient, exhibits faster convergence, and demonstrates increased noise robustness. Experimenting with denoising probabilistic diffusion models, our framework operates with approximately a quarter of the parameters and 30% of the Floating Point Operations (FLOPs) compared to standard U-Nets in Denoising Diffusion Probabilistic Models (DDPMs). Furthermore, our model is up to 70% faster in inference than the baseline models when measured in equal conditions while converging to better quality solutions.
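For context on where such a network sits in the pipeline, here is a minimal sketch of standard DDPM ancestral sampling (the setting the abstract refers to, not code from the paper). Every reverse step calls the noise-prediction network once, so a denoiser with fewer parameters and FLOPs directly reduces sampling cost. The function name `ddpm_sample` and its signature are hypothetical.

```python
# Minimal sketch of the standard DDPM reverse (denoising) process.
# Assumption: `model(x, t)` predicts the added noise eps, as in vanilla DDPM.
import torch

@torch.no_grad()
def ddpm_sample(model, shape, betas):
    """Run the DDPM reverse process with a given noise-prediction network."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)  # x_T ~ N(0, I)
    for t in reversed(range(len(betas))):
        eps = model(x, torch.full((shape[0],), t))  # one network call per step
        mean = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise  # x_{t-1}
    return x
```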


  • impossiblefork@alien.topB · 1 year ago

    They use Neural ODEs as the denoiser: multiple Neural ODEs in a chain, with a time embedding somehow fed into them.
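As a rough illustration of the idea in the comment above, here is a hedged sketch of a denoiser built from a chain of Neural ODE blocks, each conditioned on the diffusion timestep through a learned embedding. This is an assumption about the general structure, not the authors' architecture; the names `ODEBlock` and `ChainedODEDenoiser`, the Euler integrator, and all hyperparameters are hypothetical. Such a model could be dropped into the `ddpm_sample` sketch above in place of a U-Net.

```python
# Hypothetical sketch: chained Neural ODE blocks with a diffusion-time embedding.
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    """One Neural ODE block: integrates dh/ds = f(h, t_emb) with fixed-step Euler."""
    def __init__(self, channels, t_dim, steps=4):
        super().__init__()
        self.steps = steps
        self.f = nn.Sequential(
            nn.Conv2d(channels + t_dim, channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, h, t_emb):
        # Broadcast the diffusion-time embedding over spatial dims and
        # concatenate it with the state at every integration step.
        b, _, hh, ww = h.shape
        t_map = t_emb[:, :, None, None].expand(b, -1, hh, ww)
        ds = 1.0 / self.steps
        for _ in range(self.steps):
            h = h + ds * self.f(torch.cat([h, t_map], dim=1))  # explicit Euler step
        return h

class ChainedODEDenoiser(nn.Module):
    """Several ODE blocks in sequence, predicting the noise eps from x_t and t."""
    def __init__(self, in_ch=3, width=64, t_dim=32, n_blocks=3):
        super().__init__()
        self.t_embed = nn.Sequential(nn.Linear(1, t_dim), nn.SiLU(), nn.Linear(t_dim, t_dim))
        self.stem = nn.Conv2d(in_ch, width, 3, padding=1)
        self.blocks = nn.ModuleList([ODEBlock(width, t_dim) for _ in range(n_blocks)])
        self.head = nn.Conv2d(width, in_ch, 3, padding=1)

    def forward(self, x_t, t):
        t_emb = self.t_embed(t.float().view(-1, 1))
        h = self.stem(x_t)
        for block in self.blocks:
            h = block(h, t_emb)
        return self.head(h)  # predicted noise, as in standard DDPM training

# Usage (hypothetical shapes):
# eps_hat = ChainedODEDenoiser()(torch.randn(8, 3, 32, 32), torch.randint(0, 1000, (8,)))
```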