Hello, I’m a student delving into the study of large language models. I recently acquired a new PC equipped with a Core i7 14th Gen processor, RTX 4070 Ti graphics, and 32GB DDR5 RAM. Could you kindly suggest a recommended language model for optimal performance on my machine?
first off, why is your title formatted like an article? you mention being a student, so do you mean you want resources for studying the underlying architecture of large language models? in that case you could watch Andrej Karpathy's channel, which was pretty enlightening to a layman like me, but it's pretty hard to progress much further than that on the 'science' of llms without a cs degree.
other than that, there really isn't much of a 'study' to be done of llms yet, as it's a pretty new field, unless you want to get into hardcore ml stuff with a cs degree and all. as for models, with your not-quite-cutting-edge pc you could try a yi 34b finetune for longer context, though it's prone to breaking in my testing, or you could try one of the many smaller 7b models; i've been enjoying the whole family of mistral finetunes the most (openorca, openhermes, etc).
for roleplay and stuff, the LLaMA Tiefighter model is pretty cool. if you truly want access to cutting-edge hardware capable of running the best open-source models, e.g. llama 70b or goliath 120b, you could look into paid cloud gpu services like runpod, which are pretty easy to use; i had a mostly positive experience running llms there. hope this answer helps.
Thank you for your response! I apologize for any confusion caused by my title formatting. I did mean to ask for resources for studying the underlying architecture of large language models, rather than just recommendations for specific models or tools. Your suggestions are helpful, and I appreciate your willingness to share your experiences and recommendations with me.
So you're soon gonna realize that, unfortunately, your pc is not as cutting edge as you think. Your main need is VRAM, and the 4070 Ti only has 12 GB of it, so you'll be limited to 7B and 13B models. You can offload into system RAM, but your speeds plummet. Mistral 7B is a good option to start with.
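As a rough back-of-the-envelope sketch of why 12 GB caps you around 13B (the bits-per-weight and overhead numbers here are assumptions for a typical 4-bit-ish quant, not exact figures):

```python
def est_vram_gb(params_billion, bits_per_weight=4.5, overhead_gb=1.5):
    """Rough VRAM estimate: quantized weights plus a fixed
    allowance for KV cache and runtime overhead (assumed numbers)."""
    weights_gb = params_billion * bits_per_weight / 8  # billions of params -> GB
    return weights_gb + overhead_gb

for size in (7, 13, 34, 70):
    need = est_vram_gb(size)
    verdict = "fits" if need <= 12 else "needs RAM offloading"
    print(f"{size}B ~ {need:.1f} GB -> {verdict} in 12 GB")
```

By this estimate 7B and 13B land comfortably under 12 GB while 34B and 70B do not, which matches the experience above; actual usage varies with context length and quant format.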
A 24 GB GPU is still limited to fitting a 13B fully in VRAM. His PC is a great one: not the highest end, but perfectly fine to run anything up to a 70B with llama.cpp.
I didn't say it wasn't. But getting into LLMs really just shows you how much better your PC could be, and you will never be as cutting edge as you think or want.
Go buy two RTX 3090s; your mid-tier gpu only has 12 GB of VRAM. I see your passion, but do some research: you need more VRAM.
Thank you for your advice. Unfortunately, I’m unable to purchase two RTX 3090s at the moment. Firstly, my budget is fully utilized with the current components, and secondly, I doubt my system can accommodate two RTX 3090s. Considering these constraints, could you kindly provide a recommendation based on my existing setup?
It would probably be more economical to rent an A100.
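Whether renting beats buying depends on how many hours you'd actually use the hardware. A quick break-even sketch; the prices below are placeholder assumptions, not quotes, so check current listings yourself:

```python
def breakeven_hours(hardware_cost_usd, rent_usd_per_hour):
    """Hours of rental at which renting has cost as much as buying outright."""
    return hardware_cost_usd / rent_usd_per_hour

# Assumed, illustrative prices: two used RTX 3090s ~ $1500 total,
# A100 rental ~ $1.50/hour.
print(breakeven_hours(1500, 1.50))  # → 1000.0
```

Under those assumed numbers, renting stays cheaper until roughly a thousand hours of use, which is why occasional experimentation usually favors the cloud.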
The guide I just wrote should be helpful to you.
Thanks