I think just a weekly pinned thread “what model are you using?” is good
It’s definitely not useless, it just doesn’t follow instructions as literally as big models. Instead of asking it to write eloquently, deliver the instructions in the style of what you want, and it’ll convey the meaning much better.
Bad: “Write like you’re in a wild west adventure.”
Good: “Yee-haw, partner! Saddle up for a rip-roarin’, gunslingin’ escapade through the untamed heart of the Wild West!”
You also can’t force it to do whatever you want; it depends on the training data. If I want it to output a short story, requesting a scenario versus a book summary will give wildly different results.
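The bad/good contrast above can be sketched as plain prompt construction; this is just a hypothetical illustration of the seeding trick (the function names and task text are made up, not from any specific library):

```python
# Sketch of the "seed the style in-context" trick for small models:
# instead of a meta-instruction about style, open the prompt with text
# already written in the target style, so the model continues in kind.

def meta_prompt(task: str, style: str) -> str:
    # The approach that tends to fail on small models.
    return f"{task} Write like you're in a {style} adventure."

def seeded_prompt(task: str, style_seed: str) -> str:
    # The approach that tends to work better: lead with in-style text.
    return f"{style_seed}\n\n{task}"

task = "Tell a short story about a gold prospector."
bad = meta_prompt(task, "wild west")
good = seeded_prompt(
    task,
    "Yee-haw, partner! Saddle up for a rip-roarin', gunslingin' escapade "
    "through the untamed heart of the Wild West!",
)
```

Either string would then be passed to whatever local model you're running; the second one primes the completion instead of describing it.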
I think prompt engineering is entirely for smaller models.
gguf when
I agree. It’s possible it’s that small but I just think that’s unlikely.
100% recommend the new voice conversation feature with ChatGPT. It feels incredibly therapeutic.
Otherwise Samantha is really good.
But also. It’s okay to break down, especially among others.
That’s what it seems like to me. Not really any sort of alarming policy; I can’t imagine them releasing anything less than this. It’s just saying: if you’re training a model, you need to keep us informed, make sure it’s safe, and help us make sure the impending AI future doesn’t destroy the economy.
But the cat is out of the bag for large open models, unless there is some sort of massive crackdown. The US is incapable of policing the internet.
Yeah I just roll my eyes and continue onwards