I have access to a single 80 GB A100 GPU and would like to train an LLM with a GPT-like architecture from scratch. Does anyone know how to calculate the maximum model size that will fit?

  • Consistent_Area9877@alien.top · 1 year ago

    I recently took the GenAI LLM course on Coursera. A rough rule of thumb from it: a 1B-param model can be trained on a SINGLE A100 80GB GPU in bfloat16 precision with room to spare.

    I think training a 1B model can consume up to ~40 GB of memory once you count gradients, optimizer states, and activations, so you can’t really go to 2B params (that extrapolates to ~80 GB, right at the limit). But it also means ~1.5B should fit without going over 80 GB.
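
    Not from the course, just a back-of-the-envelope sketch of where that number comes from: with Adam in bf16 mixed precision, the fixed cost is commonly estimated at ~16 bytes per parameter, and activations come on top. The byte breakdown and the `fixed_state_gb` helper below are my own assumptions, not an official formula.

    ```python
    # Back-of-the-envelope memory estimate for training a GPT-style model
    # with Adam in bfloat16 mixed precision. The byte counts are a common
    # rule of thumb, not measurements:
    #   2 B bf16 weights + 2 B bf16 grads
    #   + 4 B fp32 master weights + 8 B fp32 Adam moments = 16 B/param.
    # Activations come on top and depend on batch size, sequence length,
    # and whether you use gradient checkpointing.

    def fixed_state_gb(n_params: float, bytes_per_param: float = 16.0) -> float:
        """GB needed for weights, gradients, and optimizer states."""
        return n_params * bytes_per_param / 1e9

    for billions in (1.0, 1.5, 2.0):
        print(f"{billions}B params -> ~{fixed_state_gb(billions * 1e9):.0f} GB "
              "fixed state, before activations")
    ```

    On that accounting, 1B params needs ~16 GB of fixed state, which is consistent with a ~40 GB total once activations are included. 1.5B (~24 GB fixed) leaves headroom on an 80 GB card, while 2B (~32 GB fixed) gets tight after activations scale up as well.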