https://arxiv.org/abs/2310.17680

Ok, technically a tiny language model for now:

Imagine a developer who can only change their last line of code, how often would they have to start writing a function from scratch before it is correct? Auto-regressive models for code generation from natural language have a similar limitation: they do not easily allow reconsidering earlier tokens generated. We introduce CodeFusion, a pre-trained diffusion code generation model that addresses this limitation by iteratively denoising a complete program conditioned on the encoded natural language. We evaluate CodeFusion on the task of natural language to code generation for Bash, Python, and Microsoft Excel conditional formatting (CF) rules. Experiments show that CodeFusion (75M parameters) performs on par with state-of-the-art auto-regressive systems (350M-175B parameters) in top-1 accuracy and outperforms them in top-3 and top-5 accuracy due to its better balance in diversity versus quality.
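Here is a minimal sketch of what that denoising loop looks like conceptually (the encoder, denoiser and decoder arguments are stand-ins, not the paper's actual components):

```python
import torch

def generate_code(nl_prompt, encoder, denoiser, decoder, steps=10, seq_len=128, dim=512):
    """Toy sketch of diffusion-style code generation: start from pure noise
    and iteratively denoise the *whole* program at once, conditioned on the
    encoded natural-language prompt, so earlier tokens can still be revised."""
    cond = encoder(nl_prompt)             # natural-language conditioning
    x = torch.randn(1, seq_len, dim)      # the entire program starts as noise
    for t in reversed(range(steps)):
        x = denoiser(x, t, cond)          # every step re-predicts all positions
    return decoder(x)                     # map denoised embeddings back to code tokens
```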

And it's only for code. And it seems to be much slower. But it looks extremely interesting as a “proof of concept”.

I think that instead of a lot of “denoising” steps to generate text from gibberish, a dual-model system might be the best of both worlds: take a typical autoregressive output and then run a few “denoising” steps over it to look for errors and inconsistencies. That avoids the typical methods of increasing model output quality, like progressive refinement, which require rewriting the entire text token by token several times…
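Roughly what I mean, as a toy sketch (ar_model and denoiser are hypothetical objects with made-up method names):

```python
def draft_then_denoise(prompt, ar_model, denoiser, repair_steps=3):
    """Hypothetical two-stage pipeline: a normal autoregressive draft first,
    then a few denoising passes that can revise any token in the draft,
    rather than regenerating the whole output token by token."""
    tokens = ar_model.generate(prompt)            # standard left-to-right decoding
    for _ in range(repair_steps):
        # each pass sees the full sequence and may fix errors anywhere in it
        tokens = denoiser.refine(tokens, context=prompt)
    return tokens
```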

  • Disastrous_Elk_6375@alien.topB · 11 months ago

    Intuitively, diffusion-based models for code generation make a lot of sense, glad to see people spending time on it. Really curious to see what can come out of it, even if it's an intermediate step to be used in conjunction with LLMs (i.e. the diffusion model works with pseudocode and the LLM translates the pseudocode into actual language-specific implementations).
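    Something like this, as a very rough sketch (the model objects and their methods are placeholders, not a real API):

    ```python
    def pseudocode_pipeline(nl_request, diffusion_model, llm, target_lang="python"):
        """Sketch of the two-stage idea: the diffusion model drafts
        language-agnostic pseudocode, and an LLM translates it into a
        concrete implementation in the requested language."""
        pseudocode = diffusion_model.generate(nl_request)
        prompt = f"Translate this pseudocode into {target_lang}:\n{pseudocode}"
        return llm.complete(prompt)
    ```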

  • Illustrious-Lake2603@alien.topB · 11 months ago

    I love this approach! Feels like a diffusion model would work perfectly with code! Now I'm praying this model will play nicely with C#!

  • saintshing@alien.topB · 11 months ago

    Instead of using Gaussian noise (in the latent space), I wonder if we can introduce noise by randomly inserting/deleting/replacing/swapping words. Can't we train a BERT model to predict the original text from a noise-added text?
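    For example, a toy corruption function for that kind of discrete noise could look like this (pure illustration, not taken from any paper):

    ```python
    import random

    def corrupt(tokens, vocab, p=0.15):
        """Add discrete 'noise' to a token sequence by randomly inserting,
        deleting, replacing, or swapping tokens; a BERT-style denoiser would
        then be trained to recover the original sequence."""
        out = []
        i = 0
        while i < len(tokens):
            if random.random() < p:
                op = random.choice(["insert", "delete", "replace", "swap"])
                if op == "insert":
                    out.extend([random.choice(vocab), tokens[i]])
                elif op == "delete":
                    pass                                  # drop the token
                elif op == "replace":
                    out.append(random.choice(vocab))
                elif op == "swap" and i + 1 < len(tokens):
                    out.extend([tokens[i + 1], tokens[i]])
                    i += 1                                # skip the swapped neighbour
                else:
                    out.append(tokens[i])
            else:
                out.append(tokens[i])
            i += 1
        return out

    print(corrupt("for x in range ( 10 ) :".split(), vocab=["if", "while", "0", "+"]))
    ```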

    • mushytaco@alien.topB · 11 months ago

      This has been explored a little for NLP and even audio tasks (using acoustic tokens)!

      https://aclanthology.org/2022.findings-acl.25/ and https://arxiv.org/abs/2307.04686 both come to mind

      Feels like diffusion and iterative mask/predict are pretty conceptually similar. My hunch is that diffusion might have a higher ceiling by being able to precisely traverse a continuous space, but operating on discrete tokens could probably converge to something semantically valid with fewer iterations.

      Also, BERT is trained with MLM, which technically is predicting the original text from a “noisy” version, but the noise is only introduced via masking, and it's limited to a single forward pass, not an iterative one!
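
      A rough sketch of the iterative mask/predict loop I mean (predict_fn is a stand-in for whatever masked LM does the filling, not a real API):

      ```python
      def iterative_mask_predict(tokens, predict_fn, n_iters=4, frac=0.5):
          """Sketch of mask-and-predict style iterative refinement: repeatedly
          re-mask the least confident positions and re-predict them, instead of
          BERT's single forward pass over a fixed set of masks. predict_fn fills
          masked positions and returns (tokens, per-token confidences)."""
          confidences = [0.0] * len(tokens)   # treat every position as uncertain at first
          for it in range(n_iters):
              # re-mask fewer tokens each round as the sequence stabilises
              k = max(1, int(len(tokens) * frac * (1 - it / n_iters)))
              worst = sorted(range(len(tokens)), key=lambda i: confidences[i])[:k]
              masked = [("[MASK]" if i in worst else t) for i, t in enumerate(tokens)]
              tokens, confidences = predict_fn(masked)
          return tokens
      ```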