I have access to a single 80GB A100 GPU and would like to train an LLM with a GPT-like architecture from scratch. Does anyone know how to calculate the maximum model size?
I recently took the GenAI LLM course on Coursera. A rough rule of thumb from it: a 1B-parameter model can be trained on a SINGLE 80GB A100 using bfloat16 mixed precision, with room to spare.
I think training a 1B-param model can consume up to 40GB of memory once you count gradients, optimizer states, and activations, so you can't really go to 2B params. But that also means you might be okay with 1.5B without going over the 80GB limit.
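To sanity-check those numbers, here's a rough sketch of the usual per-parameter accounting for mixed-precision training with Adam. The 18-bytes-per-param figure (bf16 weights and grads at 2 bytes each, plus fp32 master weights and two Adam moments at 4 bytes each) is my assumption, not from the thread, and it excludes activations, which depend on batch size and sequence length.

```python
# Back-of-envelope training memory for mixed-precision Adam.
# Assumed breakdown (per parameter):
#   bf16 weights (2) + bf16 grads (2) + fp32 master weights (4)
#   + Adam first moment (4) + Adam second moment (4) = 18 bytes
def training_memory_gb(n_params: float, bytes_per_param: int = 18) -> float:
    """Estimated training-state memory in GB, excluding activations."""
    return n_params * bytes_per_param / 1e9

for n in (1.0e9, 1.5e9, 2.0e9):
    print(f"{n / 1e9:.1f}B params -> ~{training_memory_gb(n):.0f} GB before activations")
```

Under these assumptions a 1B model wants roughly 18 GB just for weights, grads, and optimizer state, which is consistent with it landing around 40 GB once activations are added.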
It also depends on how many tokens you have.
This question might come off as stupid, but it’s really something I’m curious about:
I 100% see why someone would like to take a state-of-the-art current open model and fine-tune it on their own data. I don’t see why someone would want to train their own model from scratch. Can you explain it?
With bfloat16 and FlashAttention you can fully pretrain a 200M-parameter encoder-decoder model on millions of data samples in as little as a couple of weeks. You'll have to really focus on optimizing your workflow so that you're limited by GPU compute; you don't want your GPU sitting around waiting for data. I've also been able to train models with >650M parameters and a sequence length of 4096 on a single A100 using Hugging Face Accelerate, albeit much more slowly.
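For a feel of where that wall-clock time comes from, here's a back-of-envelope estimate using the common approximation of ~6 FLOPs per parameter per token. The A100's ~312 TFLOP/s bf16 peak is a published spec, but the 40% utilization figure and the 10B-token example are my assumptions; real runs (especially if data loading stalls the GPU) will be slower.

```python
# Back-of-envelope training time on a single A100.
# Training FLOPs ~= 6 * N_params * N_tokens (a standard approximation).
# A100 bf16 dense peak ~312 TFLOP/s; 40% utilization is an assumed MFU.
def training_days(n_params: float, n_tokens: float,
                  peak_flops: float = 312e12, mfu: float = 0.40) -> float:
    """Estimated wall-clock days of training, ignoring data-pipeline stalls."""
    total_flops = 6 * n_params * n_tokens
    seconds = total_flops / (peak_flops * mfu)
    return seconds / 86400

# Hypothetical example: 200M params on 10B tokens
print(f"~{training_days(200e6, 10e9):.1f} days at 40% utilization")
```

The compute itself is often only a day or two at this scale; the "couple of weeks" in practice comes from multiple epochs, lower realized utilization, and the data pipeline, which is why keeping the GPU fed matters so much.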