There has been a lot of movement in and below the 13B-parameter bracket in the last few months, but it’s wild that the best 70B models are still Llama 2-based. Why is that?

We now have 13B models, like bartowski/Orca-2-13b-exl2 at 8-bit, approaching or even surpassing the best 70B models.

  • Vilzuh@alien.topB
    1 year ago

    I have been trying to learn about fine-tuning and LoRA training for the past couple of weeks, but I’m having trouble finding approachable resources to learn from. Could you give me some pointers on what to read to get started with fine-tuning Llama 2 or Mistral?

    I have tried training quantized models locally with oobabooga and llama.cpp, and I also have access to RunPod. Really appreciate any info!
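
    For context on what a LoRA setup involves, the usual starting point is Hugging Face’s `peft` library. Below is a minimal configuration sketch, not a full training recipe; the base model name and every hyperparameter are illustrative assumptions, not recommendations from this thread:

    ```python
    # Minimal LoRA configuration sketch using Hugging Face's peft library.
    # All hyperparameters here are illustrative assumptions.
    from peft import LoraConfig

    # LoRA trains small low-rank adapter matrices on top of a frozen base
    # model, which is why it can fit on a single GPU where full fine-tuning
    # of the same model would not.
    lora = LoraConfig(
        r=16,                                 # adapter rank (capacity of the update)
        lora_alpha=32,                        # scaling factor applied to the update
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
        task_type="CAUSAL_LM",
    )

    # Applying it to a base model (commented out to keep the sketch lightweight;
    # "mistralai/Mistral-7B-v0.1" is just an example checkpoint):
    # from transformers import AutoModelForCausalLM
    # from peft import get_peft_model
    # model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
    # model = get_peft_model(model, lora)
    # model.print_trainable_parameters()  # typically well under 1% of total weights
    ```

    From there, the adapted model can be passed to a standard `transformers` `Trainer` (or a wrapper like TRL’s `SFTTrainer`), and a quantized-base variant of this (QLoRA via `bitsandbytes`) is what people commonly run on a single rented GPU.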