Google released T5X checkpoints for MADLAD-400 a couple of months ago, but nobody could figure out how to run them. Turns out the vocabulary was wrong, but they uploaded the correct one last week.

I’ve converted the models to the safetensors format, and I created this space if you want to try the smaller model.

I also published quantized GGUF weights you can use with candle. It decodes at ~15 tokens/s on an M2 Mac.

It seems that NLLB is the most popular machine translation model right now, but its license only allows non-commercial usage. MADLAD-400 is CC BY 4.0.
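For reference, the model takes the target language as a special token prepended to the input (e.g. `<2de>` for German), and the source language is detected automatically. A minimal sketch of the prompt format; the helper name is mine:

```python
def madlad_prompt(target_lang: str, text: str) -> str:
    """Build a MADLAD-400 input string.

    The target language is selected with a <2xx> prefix token
    (e.g. <2de> for German); the source language is never given
    and is detected by the model itself.
    """
    return f"<2{target_lang}> {text}"

print(madlad_prompt("de", "How are you, my friend?"))
# <2de> How are you, my friend?
```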

  • remixer_dec@alien.topB · 1 year ago

    Thanks a lot for converting and quantizing these. I have a couple of questions.

    How does it compare to ALMA (13B)?

    Is it capable of translating more than 1 sentence at a time?

    Is there a way to specify source language or does it always detect it on its own?

    • jbochi@alien.topOPB

      Thanks!

      - I’m not familiar with ALMA, but it seems to be similar to MADLAD-400. Both are smaller than NLLB-54B, but competitive with it. Because ALMA is an LLM and not a seq2seq model with cross-attention, I’d guess it’s faster.
      - You can translate up to 128 tokens at a time.
      - You can only specify the target language, not the source language.
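      Longer documents can still be handled by splitting them into chunks below the 128-token limit and translating each chunk separately. A rough sketch of the chunking step (the regex sentence split and word-based budget are simplifications; a real implementation would count subword tokens with the model's tokenizer):

```python
import re

def chunk_sentences(text: str, max_words: int = 90) -> list[str]:
    """Split text into sentence-aligned chunks under a word budget.

    max_words is set conservatively below the 128-token limit,
    since subword tokenizers emit more tokens than words.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], []
    for s in sentences:
        # Close the current chunk if adding s would exceed the budget.
        if current and len(" ".join(current + [s]).split()) > max_words:
            chunks.append(" ".join(current))
            current = []
        current.append(s)
    if current:
        chunks.append(" ".join(current))
    return chunks

print(chunk_sentences("First sentence. Second sentence! Third one?", max_words=4))
# ['First sentence. Second sentence!', 'Third one?']
```

Each chunk can then be prefixed with the target-language token and translated independently.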

  • phoneixAdi@alien.topB

    Nice, thank you!! I tried it in the Space and it works well for me. Noob question: since it’s GGUF, can I run this with llama.cpp? Can I download it and run it locally?

  • redditmias@alien.topB

    Nice, I will check out MADLAD later. I thought SeamlessM4T was the best translation model from Meta; I didn’t even know NLLB existed. Has anyone used both who can point out the differences? SeamlessM4T seemed amazingly good in my experience, but it covers fewer languages, perhaps.

  • justynasty@alien.topB

    koboldcpp 1.46.1 (from October) says ERROR: Detected unimplemented GGUF Arch. It’s best to get the newest version of the backend.

  • zippyfan@alien.topB

    I’ve been relying on Claude AI to translate Korean texts to English. I’m excited to use a local version if the context window is large enough.

    I haven’t tested it, but I’m surprised to see LLMs good enough to translate multiple languages running locally. I expected to see one-to-one language translation models first, like one dedicated to Chinese-English translation, another dedicated to Korean-French, etc.

    • FanFlow@alien.topB

      I’ve been relying on Claude AI to translate Korean texts to english.

      So did I with Korean novel chapters, but since yesterday it has started to either refuse to translate, stop at about 1/6 of the text, or write summaries instead of translations.

    • jbochi@alien.topOPB

      Sorry to be pedantic, but the translation models they released are not LLMs. They are T5 seq2seq models with cross-attention, as in the original Transformer paper. They also released an LM that’s a decoder-only T5. They tried few-shot learning with it, but it performs much worse than the MT models.

      I think that the first multilingual Neural Machine Translation model is from 2016: https://arxiv.org/abs/1611.04558. However, specialized models for pairs of languages are still popular. For example: https://huggingface.co/Helsinki-NLP/opus-mt-de-en

    • lowkeyintensity@alien.topB

      Gibberish names have been a thing since the 90s. It’s hard coming up with a name when everyone is racing to create the next Big Thing. Also, I think techies are more tolerant of cumbersome names/domains.

  • k0setes@alien.topB

    Does anyone know how it compares with Google Translate and DeepL? I’m guessing that since Google released it, it will work worse than Google Translate 🤷‍♂️

  • lowkeyintensity@alien.topB

    Meta’s NLLB is supposed to be the best translator model, right? But it’s for non-commercial use only. How does MADLAD compare to NLLB?

    • HaruSosake@alien.topB

      NLLB has horrible performance. I’ve done extensive testing with it and wouldn’t even translate a children’s book with it. Google Translate does a much better job, and that’s saying something. lol

    • jbochi@alien.topOPB

      The MADLAD-400 paper has a bunch of comparisons with NLLB. MADLAD beats NLLB on some benchmarks, is quite close on others, and loses on some. But the largest MADLAD is 5x smaller than the original NLLB, and it supports more than twice as many languages.

  • vasileer@alien.topB

    I tested the 3B model on Romanian, Russian, French, and German translations of “The sun rises in the East and sets in the West.”, and it works 100%: it gets 10/10 from ChatGPT.

      • Igoory@alien.topB

        Yes, it indeed works. I managed to run the 10B model on CPU; it uses 40GB of RAM, but somehow I felt like your 3B space gave me a better translation.

        • cygn@alien.topB

          How do you load the model? I pasted jbochi/madlad400-3b-mt into the download-model field and used the “transformers” model loader, but it can’t handle it: OSError: It looks like the config file at ‘models/model.safetensors’ is not a valid JSON file.
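          For what it’s worth, loading the checkpoint with plain transformers (outside the webui) is the standard seq2seq path. A sketch, assuming the repo follows the usual T5 layout; note that from_pretrained must be given the repo id or a local directory, not the model.safetensors file path itself, which is one way to get an error like the one above:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def load_madlad(model_id: str = "jbochi/madlad400-3b-mt"):
    # Pass the repo id (or a local directory) to from_pretrained,
    # never the model.safetensors file path.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
    return tokenizer, model
```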

          • Igoory@alien.topB

            I think I did exactly like you say, so I have no idea why you got an error.

  • Serious-Commercial10@alien.topB

    Most people only need a few languages, such as EN, CN, and JP. If there were versions for specific language combinations, I would use one to develop my own translation application.

  • yugaljain1999@alien.topB

    u/jbochi, when I try to load your Hugging Face model (madlad400-3b-mt), I get this ValueError while loading the tokenizer. Can you please tell me how to resolve it?

    ValueError Traceback (most recent call last)
    ValueError: Non-consecutive added token '' found. Should have index 256100

  • yugaljain1999@alien.topB

    @jbochi, is it possible to run the cargo example with batch inputs?

    cargo run --example t5 --release --features cuda -- \
      --model-id "jbochi/madlad400-3b-mt" \
      --prompt "<2de> How are you, my friend?" \
      --temperature 0

    Thanks