I’m planning to fine-tune a Mistral model with my own dataset (full fine-tune, not LoRA).
The dataset is not that large, around 120 MB in JSONL format.
My questions are:
- Will I be able to fine-tune the model with four 40 GB A100 cards?
- If not, is using RunPod the easiest approach?
- I’m trying to instill knowledge in a certain language, for a field where the model doesn’t have sufficient knowledge in that language. Is fine-tuning my only option? RAG is not viable in my case.
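For what it’s worth, here’s the back-of-envelope VRAM math I’ve been using for the first question, a rough sketch assuming Mistral 7B at ~7.24B parameters and standard mixed-precision AdamW (bf16 weights and gradients, fp32 optimizer states), ignoring activations and framework overhead:

```python
# Rough VRAM estimate for a full fine-tune with mixed-precision AdamW.
# Assumes ~7.24e9 parameters (Mistral 7B) -- adjust for your model.

params = 7.24e9

bytes_per_param = (
    2    # bf16 model weights
    + 2  # bf16 gradients
    + 4  # fp32 master weights
    + 4  # fp32 Adam momentum
    + 4  # fp32 Adam variance
)

total_state_gb = params * bytes_per_param / 1024**3
per_gpu_gb = total_state_gb / 4  # ideal ZeRO-3/FSDP sharding across 4 GPUs

print(f"total state: {total_state_gb:.0f} GB, per GPU: {per_gpu_gb:.0f} GB")
```

This lands around 108 GB of state, or roughly 27 GB per card with perfect sharding, which would leave only ~13 GB per 40 GB card for activations, so it seems possible but tight.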
Thanks in advance!
You get about 14 hours of an 80 GB A100 for $25.