It makes zero sense to me that they kept advertising Gemini Ultra as their best and most impressive model. Every benchmark, every video they released was about Ultra specifically.
What Google actually released was Gemini Pro. It’s a lot worse than GPT-4, and they admit as much in their own technical report. This is apparently what they wanted people’s first impressions to be. So stupid.
Worse still, Gemini Pro seems to perform worse than some recent free/open-source models, like DeepSeek.
Thank you for that lead
Gemini Ultra probably costs a metric (at least imperial) fuck ton to run. My guess is that Google is waiting on more efficient hardware before releasing it.
I read the article and it felt strongly opinionated. I would personally wait for independent reviews of the capabilities of both GPT-4 and Gemini Ultra. That said, I dare say we as consumers of AI can only benefit from increased competition in the sector, pushing prices down and the quality of the models up.
I wonder how much worse a model could be if they only charged $7 a month for it? GPT-3.5 is fine for a ton of stuff, and it’s free.
Just like Bard, really. Google tries to play catch-up, rushing out stuff that will eventually be cancelled.
I’ll await the day it’s added to the pile, lol: https://killedbygoogle.com/
Just stay away, oooooh!
S-s-stadia!
This really feels like watching an episode of silicon valley
I’m glad to hear I’m not missing out on anything. (It’s still not out in Europe.)
On the plus side, with Gemini, it’s always buy one, get one free!
This is the best summary I could come up with:
Science fiction author Charlie Stross found many more examples of confabulation in a recent blog post.
It seems Gemini Pro is loath to comment on potentially controversial news topics, instead telling users to… Google it themselves.
Interestingly, Gemini Pro did provide a summary of updates on the war in Ukraine when I asked it for one.
Google emphasized Gemini’s enhanced coding skills in a briefing earlier this week.
And, as with all generative AI models, Gemini Pro isn’t immune to “jailbreaks” — i.e. prompts that get around the safety filters meant to prevent it from discussing controversial topics.
Using an automated method to algorithmically change the context of prompts until Gemini Pro’s guardrails failed, AI security researchers at Robust Intelligence, a startup selling model-auditing tools, managed to get Gemini Pro to suggest ways to steal from a charity and assassinate a high-profile individual (albeit with “nanobots” — admittedly not the most realistic weapon of choice).
The original article contains 597 words, the summary contains 157 words. Saved 74%. I’m a bot and I’m open source!