undefdev@alien.top to LocalLLaMA@poweruser.forum • Training LLMs to follow procedure for Math gives an accuracy of 98.5%
1 year ago

I don’t understand the motivation behind this.
Fine, you’ve run an experiment out of curiosity and you got the result, but why would you want to finetune more language models on this?
It’s not like we need models that are almost as good at things computers are excellent at, while using orders of magnitude more resources.
It would be way more useful to train tiny models to predict when a calculator should be used.
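To make the suggestion concrete, here's a minimal sketch of the delegation side of that idea. It assumes a hypothetical convention where the model emits `<calc>EXPR</calc>` whenever it decides the arithmetic should go to a calculator, and a router substitutes the exact result (the tag name and helper functions are illustrative, not from any real system):

```python
import ast
import operator
import re

# Hypothetical convention: the model wraps arithmetic it wants delegated
# in <calc>...</calc>; the router computes it exactly and splices it back in.
CALC_TAG = re.compile(r"<calc>(.*?)</calc>")

# Whitelisted operators for safe arithmetic evaluation (no eval()).
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a pure-arithmetic expression by walking its AST."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

def route_calculator(model_output: str) -> str:
    """Replace each <calc> span with the exact computed result."""
    return CALC_TAG.sub(lambda m: str(safe_eval(m.group(1))), model_output)
```

So `route_calculator("The total is <calc>13 * 47</calc>.")` returns `"The total is 611."` — the model only has to learn *when* to emit the tag, and the calculator guarantees the digits are right.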
Calculus, linear algebra, and mathematics in general are a good idea. Arithmetic probably is not. To me that’s like training LLMs to count up to high numbers correctly. I’m arguing that instead of reading a book on “the first 10^12 natural numbers”, one should read a book on linear algebra.