Give it 5 years with the Mac Studio. Next year 256 GB, it will go up real quick.
Cheers! I'll keep a close watch on this, nice work!
Thanks, I'll have a look. It seems very promising for my use case as well. Btw, is Nitro different from the download you have on the main page? Nitro seems to be only for Apple's M1 models, while the main page mentions M2 models as well?
Chrome is marking the download from the GitHub repo as suspicious.
explain it to you
lol, might as well have spoken Mandarin, this is so far beyond my math skills. I have no clue what this guy is saying.
Maybe they will be a bit more reasonable with their content filtering, some of the shit you see now is ridiculous.
The competition for running locally is catching up to GPT. I'm happy to pay for everything, but OpenAI is so restricted these days that you're better off running something locally.
It's 2 dollars per hour, man, it's a great way to try stuff out. Also, if you already know what you need to do, you won't need that much time.
The last comment on GitHub seems a way safer option. On every reboot just do sudo sysctl iogpu.wired_limit_mb= and you're done. No weird patching and booting and whatnot.
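Rough sketch of what that looks like in practice (the 184320 value is only an illustrative number for a 192 GB machine, not a recommendation; pick whatever headroom you want to leave for macOS):

    # raise the wired GPU memory limit (value is in MB; 184320 ~= 180 GB, example only)
    sudo sysctl iogpu.wired_limit_mb=184320

    # verify the new limit took effect
    sysctl iogpu.wired_limit_mb

The setting resets on reboot, which is why you'd rerun it each time instead of patching anything.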
But amazing knowledge again. Especially for the 192 GB version, this could open up some extra doors. I assume the M3 next year will go to 256 GB, so that would bring some awesome models to the table!