Paper: https://arxiv.org/abs/2310.20092

Abstract:

Diffusion models are a family of generative models that yield record-breaking performance in tasks such as image synthesis, video generation, and molecule design. Despite their capabilities, their efficiency, especially in the reverse denoising process, remains a challenge due to slow convergence rates and high computational costs. In this work, we introduce an approach that leverages continuous dynamical systems to design a novel denoising network for diffusion models that is more parameter-efficient, exhibits faster convergence, and demonstrates increased noise robustness. Experimenting with denoising diffusion probabilistic models, our framework operates with approximately a quarter of the parameters and 30% of the Floating Point Operations (FLOPs) compared to standard U-Nets in Denoising Diffusion Probabilistic Models (DDPMs). Furthermore, our model is up to 70% faster in inference than the baseline models when measured under equal conditions, while converging to better-quality solutions.

https://preview.redd.it/djk9mdlc9e0c1.png?width=995&format=png&auto=webp&s=65a002f1f320e68b71753ac32c6386c22e76c1c9

https://preview.redd.it/i87gizkc9e0c1.png?width=1108&format=png&auto=webp&s=34f25ecc319ffa34f545e850a5c95cb007e0abd8
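
The abstract frames the denoiser as a continuous dynamical system rather than a deep stack of distinct U-Net blocks. The paper's concrete architecture isn't spelled out in this thread, so the PyTorch snippet below is only a generic sketch of that idea: one weight-tied vector field f(h, t) integrated over "depth" with fixed Euler steps. Every name here (`ContinuousDenoiser`, `ContinuousBlock`, `n_steps`, the channel counts) is an illustrative assumption, not the authors' design.

```python
# Hypothetical sketch (not the paper's architecture): a denoiser whose depth is
# treated as a continuous dynamical system, discretized with fixed Euler steps
# over a single weight-tied residual block.
import torch
import torch.nn as nn


class ContinuousBlock(nn.Module):
    """Vector field f(h, t); the same weights are reused at every integration step."""

    def __init__(self, channels: int, t_dim: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm = nn.GroupNorm(8, channels)
        self.t_proj = nn.Linear(t_dim, channels)
        self.act = nn.SiLU()

    def forward(self, h, t_emb):
        # Inject the diffusion-time embedding as a per-channel bias.
        h = self.norm(h) + self.t_proj(t_emb)[:, :, None, None]
        return self.conv2(self.act(self.conv1(self.act(h))))


class ContinuousDenoiser(nn.Module):
    """Euler-discretized continuous-depth denoiser: h_{k+1} = h_k + dt * f(h_k, t)."""

    def __init__(self, in_ch=3, channels=64, t_dim=128, n_steps=8):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(1, t_dim), nn.SiLU(), nn.Linear(t_dim, t_dim))
        self.lift = nn.Conv2d(in_ch, channels, 3, padding=1)
        self.field = ContinuousBlock(channels, t_dim)  # one block, shared across depth
        self.proj = nn.Conv2d(channels, in_ch, 3, padding=1)
        self.n_steps = n_steps

    def forward(self, x, t):
        t_emb = self.embed(t.float().view(-1, 1))
        h = self.lift(x)
        dt = 1.0 / self.n_steps
        for _ in range(self.n_steps):  # fixed-step Euler integration in depth
            h = h + dt * self.field(h, t_emb)
        return self.proj(h)  # predicted noise, same shape as x


if __name__ == "__main__":
    model = ContinuousDenoiser()
    x = torch.randn(4, 3, 32, 32)
    t = torch.randint(0, 1000, (4,))
    eps_hat = model(x, t)
    print(eps_hat.shape, sum(p.numel() for p in model.parameters()))
```

In a sketch like this, the parameter savings come from reusing the same block at every integration step, while compute scales with `n_steps`. The abstract's concrete numbers (roughly a quarter of the parameters, 30% of the FLOPs, up to 70% faster inference) refer to the authors' architecture, not to this toy example.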

  • CatalyzeX_code_bot@alien.topB · 10 months ago

    No relevant code picked up just yet for “Beyond U: Making Diffusion Models Faster & Lighter”.

    Request code from the authors or ask a question.

    If you have code to share with the community, please add it here 😊🙏

    To opt out from receiving code links, DM me.

    • impossiblefork@alien.topB · 10 months ago

      No, it isn’t. It’s perfectly comprehensible.

      The description of the architecture isn't incredibly clear, but it's enough to get the idea. I'd have liked to see more details, but if they want to write it like this, that's fine.