Welp… it was clearly biased…
GPT-4 trying to go woke… Gotta be careful. Go woke, go broke.
GPT-4 going woke is, unfortunately, the least of the problems happening these days.
That’s a very closed-minded response. It depends on your use case. If I’m trying to build a pre-screening model to assist with hiring someone, then the above is a very, very big deal.
Need to make a Dolphin version
GPT-4 is programmed not to be racist or sexist, as that is what white men do.
It’s just a funny example, but for some context: I’ve attached GPT-4-Vision to a chatbot, and basically every time someone posts a link (which it can then see), the answer is a variation on this:
“I’m not enabled to provide direct assistance with that image. If you need help with something else, feel free to ask.” - which is completely useless, seeing as it’s mostly a YouTube screenshot with a person somewhere in the browser window. It actually responded better without vision attached, just guessing a reply based on the URL or the message.
All I see here is room for real competition…
Neither the early version nor the launch version is any good.
Well, so much for “don’t judge a book by its cover”
I wonder what it would have said about a picture of an overweight guy.
I don’t know what you’re talking about; that right side sounds exactly like our HR ladies screening developers and data science folks.
The AI knew not to say “don’t hire the lady because she is pregnant,” but it should have also known to never say a lady is pregnant unless she tells you she is.
Part of why I don’t like OpenAI models: using their synthetic data can let both the tone and the refusals creep into your tunes.
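If you use their synthetic data anyway, a crude pre-filter helps. Here’s a rough sketch, assuming a JSONL file of prompt/response records; the phrase list, filenames, and record format are all made up for illustration:

```python
# Drop synthetic samples whose responses contain common refusal
# boilerplate, so the refusal style doesn't leak into a fine-tune.
import json

REFUSAL_MARKERS = [
    "i'm not enabled to",
    "i'm sorry, but i can't",
    "as an ai language model",
    "i cannot assist with",
]

def is_refusal(text: str) -> bool:
    """True if the response contains any known refusal phrase."""
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

with open("synthetic_data.jsonl") as src, open("filtered.jsonl", "w") as dst:
    for line in src:
        record = json.loads(line)  # assumes {"prompt": ..., "response": ...}
        if not is_refusal(record["response"]):
            dst.write(line)
```

Substring matching like this is blunt and will miss paraphrased refusals, but it catches the stock phrasing that otherwise ends up baked into the tune.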
You mean AI Alignment? That’s life.