Right now it seems we are once again on the cusp of another round of LLM size upgrades. It appears to me that having 24GB of VRAM gets you access to a lot of really great models, but 48GB really opens the door to the impressive 70B models and lets you run the 30B models comfortably. However, I'm seeing more and more 100B+ models being released that push 48GB setups down into lower quants, if they can run the model at all.
This is big, in my opinion, because 48GB is currently the magic number for consumer-level cards: 2x 3090s or 2x 4090s. Adding an extra 24GB to a build via consumer GPUs turns into a monumental task due to either space in the tower or the capabilities of the hardware, AND it would only put you at 72GB of VRAM, which is the very edge of the recommended VRAM for the 120B Q4_K_M models.
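For anyone wanting to sanity-check those numbers, the rough math is just parameter count times bits-per-weight divided by 8, plus some overhead for the KV cache and runtime. Here's a quick sketch of that estimate; the ~4.85 bits/weight figure for Q4_K_M and the 15% overhead factor are my own ballpark assumptions, not exact numbers:

```python
# Back-of-the-envelope VRAM estimate: params * bits-per-weight / 8,
# plus a fudge factor for KV cache and runtime overhead. These are
# assumed numbers, not exact -- real usage depends on context length,
# backend, and quant format.

GB = 1024 ** 3

def est_vram_gb(params_billion: float, bits_per_weight: float,
                overhead: float = 1.15) -> float:
    """Approximate VRAM (in GiB) needed to load and run a quantized model."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / GB

# Q4_K_M works out to roughly ~4.85 bits per weight (assumption).
for size in (30, 70, 120):
    print(f"{size}B @ Q4_K_M: ~{est_vram_gb(size, 4.85):.0f} GB")

# Rough output:
#  30B @ Q4_K_M: ~19 GB  -> fits on a single 24GB card
#  70B @ Q4_K_M: ~45 GB  -> right at the edge of 2x 24GB (48GB)
# 120B @ Q4_K_M: ~78 GB  -> past 72GB unless you drop to a lower quant
```

Treat those outputs as ballpark figures only; the point is that each size tier lands right around one of those VRAM thresholds.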
I genuinely don't know what I am talking about and I am just rambling, because I am trying to wrap my head around HOW to upgrade my VRAM to load the larger models without buying a massively overpriced workstation card. Should I stuff 4 3090s into a large tower? Set up 3 4090s in a rig?
How can the average hobbyist make the jump from 48GB to 72GB+?
Is it feasible to take a wait-and-see approach and hope NVIDIA drops new (scalper-priced) high-VRAM cards? Or hope and pray for some kind of technical magic that drops the required VRAM while keeping quality intact?
The reason I am stressing about this and asking for advice is that the quality difference between smaller models and 70B models is astronomical, and the difference between the 70B models and the 100B+ models is a HUGE jump too. From my testing, the 100B+ models really turn the “humanization” of the LLM up to the next level, leaving the 70B models sounding like… well… AI.
I am very curious to see where this goes by the end of 2024, but one thing is for sure… I won't be seeing it on a 48GB VRAM setup.
Building a system that supports two 24GB cards doesn't have to cost a lot. Boards that can do dual x8 PCIe and cases/power supplies that can handle 2 GPUs aren't hard to find. The problem I see past that is that you're running into much more exotic/expensive hardware; AMD Threadripper comes to mind, which is a big price jump.
Given that the market of people who can afford that is much smaller than for dual-card setups, I don't feel like we'll see the lion's share of open-source work happening at that level. People tend to tinker on things that are likely to get used by a lot of people.
I don't really see this changing much until AMD/Intel come out with graphics cards that bust the consumer-card 24GB barrier to compete with Nvidia head-on in the AI market. Right now Nvidia won't do that, so as not to compete with their premium-priced server cards.