- 0 Posts
- 8 Comments
Joined 2 years ago
Cake day: October 28th, 2023
daHaus@alien.top B to Data Hoarder@selfhosted.forum • Are NAS-rated drives REALLY worth it? • English · 1 · 2 years ago
Assume it's all marketing unless proven otherwise
daHaus@alien.top B to LocalLLaMA@poweruser.forum • Quantizing 70b models to 4-bit, how much does performance degrade? • English · 1 · 2 years ago
This seems like something that would be difficult to predict, considering how fundamental what you're changing is. The quantization method you use, and how refined it is, also matters a great deal.
daHaus@alien.top B to LocalLLaMA@poweruser.forum • Is it just me or is prompt engineering basically useless with smaller models? • English · 1 · 2 years ago
Try this for a prompt and go from there: "Describe yourself in detail."
What's your context and batch size? Using llama.cpp with a 1024-token context and a 1024-token batch, it acted like you described.
With a context and batch size of 1536 tokens it performed as expected (openorca/mistral 7B).
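For anyone who wants to reproduce this, here is a minimal sketch of that setup using the llama-cpp-python bindings. The model filename is a hypothetical placeholder; `n_ctx` and `n_batch` mirror the 1536 values that worked above.

```python
# Minimal sketch of the setup above, assuming llama-cpp-python is installed
# (pip install llama-cpp-python). The model path is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-openorca.Q4_K_M.gguf",  # hypothetical path
    n_ctx=1536,    # context window, in tokens
    n_batch=1536,  # prompt-processing batch size, in tokens
)

# The prompt suggested earlier in the thread
out = llm("Describe yourself in detail.", max_tokens=256)
print(out["choices"][0]["text"])
```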
Congratulations! You’ve graduated from Windows. Time to upgrade.
daHaus@alien.top B to LocalLLaMA@poweruser.forum • What is the best 7B model for reading comprehension and instruction following at the moment? • English · 1 · 2 years ago
IMO mistral-7B-orca; jazzing it up by telling it it's a subject matter expert and not to guess seems to help too.
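As a hedged illustration of that "subject matter expert, don't guess" trick (the exact prompt wording and model path here are my own assumptions, not from the comment), with the same llama-cpp-python bindings:

```python
# Sketch of the system-prompt trick described above, using llama-cpp-python.
# The model path and the exact prompt wording are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(model_path="models/mistral-7b-openorca.Q4_K_M.gguf", n_ctx=1536)

resp = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": ("You are a subject matter expert. "
                     "If you are not sure of an answer, say so; do not guess.")},
        {"role": "user",
         "content": "Read the following passage and answer the question. ..."},
    ],
)
print(resp["choices"][0]["message"]["content"])
```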
Is there a technical reason these prompts begin with "You are" and not "You're", or is it just an oversight?