Portuguese is my mother tongue. I’ve tried ChatGPT, GPT-4, Claude, local LLMs, etc., and they all produce inaccurate results that I still have to edit heavily. They also can’t reliably remember whether a character is male or female. It feels like they all have the same capabilities as Google Translate, which, as far as I know, isn’t an AI.
Edit: Compare the grammar and spelling of properly uncensored models with censored ones when it comes to “gendered” grammar and natural-language concepts. There is a huge difference.
That’s not a question of language; that’s the result of promoting diversity through censorship. And the worst part is, none of the big companies give the faintest whiff of a fuck about diversity or minorities; they are just crossing things off their marketing bucket list.
Censorship causes massive performance degradation. If you mindlessly force a model to have a bias, no matter the subject, the model will propagate this bias throughout its entire base knowledge.
Good or bad, a bias is a bias. We are talking about computers, which are deterministic and literal; computer code only ever takes things literally, and LLMs are barely the first step towards generalisation and unassisted extrapolation.
Even when the general concept of, say, fighting gender discrimination is good at its core, force-feeding it to an LLM, which is computer code after all, will do things like make it completely lose the concept of gender, including its linguistic concepts, solely because they share the word “gender” across the literature and therefore the training dataset.
Yes, discrimination is stupid, racism is stupid, forcing everybody to live exactly the one “right” way is stupid.
But using censorship to fight this is hands down the dumbest and laziest way possible.
i really feel like you should find a different word than “censorship” for bots not being as dirty as you want. a bot deciding to be circumspect isn’t what “censorship” means. there are lots of countries where you can get killed for being a journalist, and a lot of places where it’s really hard to get news out b/c of direct government control of all the media, so i don’t think it’s polite to use the same word for wishing robots were more racist
ChatGPT suggested the term ‘ideological engineering’, but I’m not sure that fully captures the nature of the problem.
When you ask a bot who Jim Hopkins is, and it says it will not answer this question because it respects people’s private lives, then yes, I want bots and people to be more racist.