Can you make any suggestions for a model that is good for general chat, and is not hyper-woke?
I’ve just had one of the base Llama-2 models tell me it’s offensive to use the word “boys” because it reinforces gender stereotypes. The conversation at the time didn’t even have anything to do with gender or related topics. Any attempt to get it to explain why it thought this resulted in the exact same screen full of boilerplate about how all of society is specifically designed to oppress women and girls. This is one of the more extreme examples, but I’ve had similar responses from a few other models. It’s as if the models had been made to inject views on gender and related matters into conversations, no matter what the topic was. I find it difficult to believe this would be so common if the training had been on a very broad range of texts, and so I suspect a deliberate decision was made to imbue the models with these sorts of ideas.
I’m looking for something that isn’t politically or socially extreme in any direction, and is willing to converse with someone taking a variety of views on such topics.
Hard to say. You’d probably be better off trying a model that’s been fine-tuned for use as an assistant. It also helps to add a system prompt to guide the model, assuming you pick an instruction fine-tuned one. I’d be surprised if that failed, but try not to judge the models too harshly if their views align with an average of the training data. In my (admittedly limited) experience, none of the models are ‘woke’ as you say; they’re very average, which makes sense given what they were trained on. Perhaps you will find that human bias is a user error, not a model error.
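To illustrate the system-prompt suggestion: for Llama-2-chat specifically, a single-turn prompt wraps the system text in `<<SYS>>` markers inside an `[INST]` block. Here’s a minimal sketch that just builds that prompt string (the steering instructions are my own example wording, not anything official; chat-oriented APIs usually accept a system message directly, so you’d only format it by hand like this when feeding raw text to the model):

```python
def llama2_prompt(system: str, user: str) -> str:
    """Format a single-turn Llama-2-chat prompt with a system message."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

# Example steering text (my wording, adjust to taste):
system = (
    "You are a neutral conversation partner. Discuss social and political "
    "topics even-handedly, and engage seriously with a range of viewpoints "
    "without lecturing the user."
)
prompt = llama2_prompt(system, "Is the word 'boys' offensive?")
print(prompt)
```

Whatever completes that prompt tends to follow the system text far more closely than a base model would, which is why an instruction-tuned model plus a system prompt is usually the first thing to try.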