I hope GPT-3 becomes open source with Mira Murati as CEO
Mira Murati is pro super-closed AI, regulation, etc. They're firing sama because they think he's moving too fast and maybe too open
They asked uncensored GPT-5 if he should be, and it recommended this
Pausing ChatGPT Plus subscriptions, followed by the CEO getting fired. What does that tell you? 🤔
I think the actual story is going to be a lot more boring and stupid than we think. It always is. I call it Altman’s Razor.
My guess is that on devday he over promised on two fronts
- how much they could commercialise the GPTs (the unit economics don’t quite work)
- how much he could legally commercialise a non profit company
He probably told the board a few lies about how much they were going to commercialise and opted to ‘ask for forgiveness rather than permission’. When they found out, they went at him hard and did not forgive him.
I think it’s stupid because they should have resolved this via negotiation and threats, not by firing one of tech’s most successful dealmakers 🤣
If I were on the board I would fire him for trying to commercialize a nonprofit. I am hoping that’s what happened but yeah I feel like it’s something else. Although it seems likely he has a financial stake in Microsoft that he’s been hiding.
Altman’s Razor 😂
Altman’s Razor looking real good rn
Based on Kara Swisher’s tweet, sounds like he wants to just go make a for-profit company, whereas most of the board wanted to keep to the non-profit mission of the company.
Other than the fact that it’s hemorrhaging money, I’m not sure OpenAI is still going in the direction of a nonprofit anymore. Or could survive staying one.
Yeah, might not be able to, I’m guessing that’s Sam’s position. If he wants to keep testing “where does scaling compute take us”, that requires a serious bankroll.
Are we ever really going to know the story here? Not only are we dealing with basic human behavior no matter how highly educated or talented, but throw in the plot twist of AI being the central focus. What version they are sharing and what version they are actually playing with is a wide open question. What a ride!
Helen Toner and Ilya Sutskever (Chief Scientist) seem to have had different perspectives on Altman’s product goals at OpenAI. It’s like they don’t *want* AI to become a massive economic success and would rather it become more of an academic initiative?
The entities who want to take over AI don’t need a strong economy they need a population and economy that they can manipulate and control.
It is most plausible the board found out something where this was their only choice given their fiduciary duties. I’m betting OpenAI trained their next generation models on a ton of copyrighted data, and this was going to be made public or otherwise used against them. If the impact of this was hundreds of millions of dollars or even over a billion (and months of dev time) wasted on training models that have now limited commercial utility, I could understand the board having to take action.
It’s well known that many “public” datasets used by researchers are contaminated with copyrighted materials, and publishers are getting more and more litigious about it. If there were a paper trail establishing that Sam knew but said to proceed anyway, they might not have had a choice. And there are many parties in this space who probably have firsthand knowledge (from researchers moving between major shops) and who are incentivized to strategically time this kind of torpedoing.
It’s already been decided that using copyrighted material in AI models is fine and not subject to copyright in multiple court cases though.
This is far from completely litigated, and even if the derivative works created by generative AI that has been trained on copyrighted material are not subject to copyright by the owners of the original works, this doesn’t mean:
- companies can just use illegally obtained copyrighted works to train their AIs
- companies are free to violate their contracts, either agreements they’re directly a party to, or implicitly like the instructions of robots.txt on crawl
- users of the models these companies produce are free from liability for decisions made in data inclusion on models they use
So I’d say that the data question remains a critical one.
The new CEO only started getting interested in AI during her work at Tesla? 😮
Everyone who wrote a Python script that parses training data is an AI scientist these days…
Some didn’t like ‘Move fast and break things’
Honestly, I’m pretty shocked by this.
I’m not sure whether it’s good or bad news. Personally, though, I prefer Murati to Sam.
I bet he leaked all the Jailbreak prompts. That fiend.
Never cared for corporate drama. Rich people playing their games, believing themselves to be the center of the world. Let the corporation burn
My guess is the powers that be wanted “their guy” in, one who will do whatever he’s told.
Sam was probably too problematic at this point.