I’d like to think that this will refocus OpenAI towards fundamental research that will deliver the ASI rather than efforts to commercialise fragments.
OpenAIGATE
CEO Nadella “furious”
Not shit, they pony up 10B then bet the future of Microsoft on that “everything is Copilot now” (base on OpenAI) strategy and announced it to the world and boom, get the rug immediately pulled under them. They basically got catfished.
I think a big part of the enthusiasm for AI comes from Microsoft’s deeply and wide lobbying abilities. It would be fascinating to watch them back that out and try and pivot to a new new thing.
This is what went down: Military came and said “We want an AI for war”,
Altman said “Oh hell naw”,
board said "But that’s billions of dollars directly into our personal bank accounts you said no to, get out!’What do people think super-AI is going to do? All it can do is print letters on the screen. Flip a switch, it’s gone. It can’t actually DO anything; it has no body, no thumbs. The smartest AI conceivable can’t do a thing if I take a hammer to it.
What are people scared of???
By producing letters on a screen it can do everything you’re able to do on the Internet, except at scale and faster.
What exactly are you going to hit with your hammer?
I hit the computer it’s running on. This is not rocket science, people. The only thing an LLM can do is spit out characters to a terminal. It can’t kill you or make planes fly into buildings or build a robot army or launch nuclear weapons. It can’t do anything.
All these downvotes and not one counterexample. HOW can an LLM endanger anyone? Simple and serious question. I mean, someone start up a local instance of Llama and use it to start a fire or kill a child or something and prove me wrong here. You just hit ctrl-C and the LLM dies and people are acting like they’re Skynet.
Take your argument further: All any computer can do is maths and spit out letters and numbers.
Yet I’m sure we can agree that computers can be used to control and manage systems remotely that can be used to wreak some havoc when abused.
Generative AI/ML can just be used to do it faster and easier than before.
It’s 2029, you’re made it inside the amazon datacenter. You have a revolver with four bullets left, a crowbar, a can of soda.
“I’m in” Alcalde says into a walkman recording his heroism for posterity.
around you, server racks stretch in every direction, seemingly into infinity. The AI is hacking global GPS, weather, and airport radar computers, changing positional values into nonsense because some idiot told the AI that his dad is going to kill him when he gets home from his trip. Obviously if the plane crashes, the boy’s physical safety will be secured. You want your wife’s plane to land safely
Explain your next move.
You’re being intentionally obtuse here. Right now LLMs are harmless because we only let them print characters to the screen. But suppose you have an assistant version that you allow the ability to execute code. And you ask it to write some code to process an excel file and run it, but while it does that it also decides to copy itself to an external server you don’t know about and starts doing anything there. Without reviewing every thing it does, you can’t be certain that it’s not doing something malicious. But if you have to review everything it does, then it’s not nearly as powerful and helpful for automating tasks as it could be.
You say you can destroy it by destroying the computer it’s on. But you can’t do that. You have no idea what or where any given EC2 instance is located, and if you did, you wouldn’t be able to get there before the AI transfers itself to another computer within a few minutes or seconds.
A truly rogue, intelligent, sentient AI hell bent on damaging the world, unleashed onto the internet could do untold damage to our society.
How can it be a “coup” when the board is allowed to hire and fire the CEO?
I find it somewhat interesting that Sutskever literally seems to have quite the big brain, judging by his head. Is that weird?
ROFL
Hey if Sam Altman is really one of the good ones, now is his chance to create an open-sourced version that rivals ChatGPT and really change the world for the better.
What is open about OpenAI I never understood this.
The name and the vibe.
Ilya has always seemed like a clown to me. Willing to bet he was jealous of the attention Sam was getting and wanted to be the center of attention. Plus his obsession with “AI alignment” is so cringe.
Let’s thank “God” that a redditor isn’t deciding on AI alignment and safety, just so he can use an “uncensored” model to jerk off.
Seems to me that Ilya Sutskever must be some kind of nut job idealist / egoist. I’m tempted to say you can take the data scientist out of Russia, but you can’t take the Russian out of the data scientist. This plays like a Soviet era coup – sudden, poorly thought out, meat fisted, and unlikely to make anything better.
Altman and Brockman are probably going to start their own company, (funded by Microsoft?), poach all of Open AI’s good people, and Open AI is going to go the way of the dodo… or maybe Ilya will have enough money to keep a little clown car / research lab company running or something, but nothing of any consequence is ever going to come out of Open AI ever again. I’d bet a paycheck on it.
The documented sequence of events makes the board (and Ilya in particular) look colossally stupid. Never ceases to amaze me how some very smart people can be so completely clueless from an interpersonal dynamics perspective. Zero EQ. If they were unhappy with Altman there was a right way to handle this, and a million wrong ways. It seems like they asked ChatGPT to give them the absolute worst possible wrong way, and then asked it to write the blog post announcing it.
Seems to me that Ilya Sutskever must be some kind of nut job idealist / egoist.
Funny you say this, since the reason they fired Altman were moves by Altman that were not in the interest of OpenAI, but rather ego moves that threaten the security of the AGI development.
You can stick your racist and stereotypical comments about a person being originally from Russia in your back by the way. The decision came from the whole board and Ilya is Canadian, raised also in Israel.
That’s nonsensical. The reason he was fired was a power struggle between Ilya and Sam. Period. They have different visions for how to achieve AGI, and Ilya is an idealist who wants to try to do it with a small research organization. He has no clue how much capital it takes to achieve what they’re trying to do.
With regard to the rest, Russia isn’t a race, Ilya was born there, and the real decision came from Ilya. Everybody knows this. If Altman comes back, Ilya will be out. What does that tell you?
He has no clue how much capital it takes to achieve what they’re trying to do.
Jesus, maybe you can enlighten him. Or maybe you can realize that money isn’t any issue with OpenAI at the moment. Even Altman said this months ago. It’s not about money, it’s about how to approach it. And OpenAI’s vision is not that of a MS capitalist approach in the long term. Even though they partnered with a capitalist company. But under certain conditions.
With regard to the rest, Russia isn’t a race, Ilya was born there, …
Fun fact: There are no human races to begin with (but that’s what Racist theories were all about). Just the U.S. somehow uses this term (and some other country in the world I forgot about). “Racism” refers to discrimination based on ethnicity, which also includes things like a common nation of origin. So you made a racist statement.
money isn’t any issue with OpenAI at the moment.
The instant this little stunt was pulled, money became a problem. You see, a startup is funded by two streams of capital: revenues, and venture capital. Revenues are, like all startups, insufficient to fund the company, so that leaves VCs. Guess who is flipping the hell out right now trying to force Altman back? That’s right, the venture capitalists. If they don’t get their way, they will pull their capital, and your hero will have precisely enough capital necessary to fund the operations of a hotdog stand.
“Racism” refers to discrimination based on ethnicity
True, but nation of origin dies not in fact predict ethnicity. You know how I know this? I’ll tell you my nation of origin: USA. Now tell me my ethnicity. You can’t. What I said was perhaps a stereotype (a humorous one that gets used in any situation where someone acts like their stereotypical origin, you see it is what’s known as an idiom) of people from Russia, but it is not racist. I don’t even know Ilya’s ethnicity.
“Ego is the enemy of growth.”
What alternate timeline is that clown living in, lol…?
Ilya Sutskeya is what happens when a smart person makes it to adulthood without developing any EQ.
Now there are reports on LinkedIn that the board is in negotiations to bring Altman and Brockman back (probably serious pressure from Microsoft I would guess… like “not only are we not going to partner with you, we’re going to exercise this clause in our contract that removes your access to all of our compute, effective immediately. Try developing GPT-5 on whatever you can scramble together from memory, morons… nobody in their right mind would give you the kind of sweetheart deal we gave you after this stupid stunt. Friggin’ amateurs.”)
Openai dosent need microsoft. The second ms dose such a thing, they got billions of dollars of equipment idling, every customer loses faith in them, and google or amazon sells them compute instead.
They’re in partnership with each other. It’s not a one sided partnership. The VCs didn’t have to expend capital to get compute, under your scenario they would.
With regard to equipment idling, yeah, MS probably wouldn’t love that, but it basically puts them in the same place as Amazon right now.
Google would never go into partnership with them, because OpenAI was founded specifically to compete with Google.
Amazon might, but given such random behavior from the board, Amazon would extract some kind of board level control before they’d expose themselves like Microsoft did.
you act like everyone; apple, amazon, google, facebook, elon musks companies is not spending billions on trying to get what openAI has. If microsoft bitch their way out of the partnership by pushing to hard for sam altman(if they want him so badly, its because hes in their pocket), the others will relize they can take microsofts place, and all they have to do is recive the usage of the by far best AI in the marked, and not try to take over the company. Especially google wants this. if anything just to prevent microsoft from improving bing. but also while gpt4 is the best attempt MS has had and will have at taking over search, gpt4 or whats next could be googles best attempt to take over with their google docs ecosystem. its allready nowhere near as far behind ms-office as bing is google search.
that would be worth 100s of billions to google medium-long term. Facebook could abuse our data even more, and maby make augmented reality acutally usefull if they had a partnership with openAI. Amazon may have less internal uses that i can think of, but they would love to host the API and get a small cut, that becomes a lot of money when its spread over so much use. atleast amazon would be willing to SELL compute the openAI, anyone would. And lastly apple is realy looking to run local AI on their devices. Im sure they would pay billions and billions to openai, in exchange for a (gpt.35 turbo)turbo. Supposedly gpt 3.5 turbo is not that large, so if they downscale it a bit more, they may be able to get it running on apple hardware, and if they downscale it enougth, it wont be usefull for people to reverse for server use. And im sure if anyone could run a local model that was hard to reverse, it would be apple.
Yeah sure, individualy these companies are a lot larger than OpenAi, and completely overpower them. But they are all competing for the same very valuable thing that openAi has acces to. They can bluff and say they dont want it, but openAi dosent even need to take any risk when calling the bluff. they may just pick anyone else.
Bringing back…? That can’t realistically happen without a board reboot. Maybe not at exactly the same time but…you can’t have the ex back without restructuring the board, that will never work.
Try developing GPT-5 on whatever you can scramble together from memory, morons
wait what? this makes no sense
What an epic world class mess by an ambitious board member and a few suckers to pull of a board coup… these types of events in an org, along with M&A are massively disruptive… it takes years and scale as an org to tackle these types of events with process and discipline … this has amateur hour written all over it. They need a real board that works for all of it’s stakeholders and constituents, not primarily for themselves.
Seems like Microsoft’s Satya is furious, and who can blame him? They invested so much in OpenAI and then the board does this sneaky change, regardless of the reasons, is shocking they didn’t communicate with Microsoft… If this article is accurate I bet they will have a much harder time securing funding, no one wants to invest in turmoil and uncertainty.
MS can only blame themselves not doing the minimum research into the governing structure.
Also MS literally just spent 70B on a video game publisher. I don’t think they care that much.
Then they should be furious with Sutskever for wanting to slow things down. Slowing things down is not in the best interest of their shareholders. Sutskever needs to go, now and Sam Altman should be reinstated. Bring on the singularity.
Seems like Microsoft’s Satya is furious,
Prior to the ousting, this was Microsoft’s dream…a path to rapid customer and product commercialization, market dominance and leadership, with billions of dollars at stake…
and the board threw it all out the window in a single moment by putting on the breaks.
I understand the caution, it was probably the right move, but from Microsoft’s point of view., in terms of potential, in the long run it may cost them billions. This is gotta hurt especially since they were blindsided.
That board is fucked. You don’t bite the hand that feeds you…
Seems a miss from Microsoft’s lawyers if they didn’t check out how the board and company was organized before making such a large investment.
And at this point, there are plenty of companies that would jump at the chance to invest/get a controlling interest in OpenAI (and obviously they’d ask for a board seat at the very least) – Google, Apple, even Meta.
Good reminder to not add a couple of nobodies to your board. Lol.
I mean you can be furious about less profits but really this wasn’t that much of a risky move for MS. Most of the money they gave them is literally to pay MS for compute. And then they apparently take most of OpenAI earnings until payd back or something. That’s pretty different from actually giving someone 10B and your money is gone if they go down the drain before getting out of the red numbers.
no one wants to invest in turmoil and uncertainty
Elon Musk’s ears are burning right now.
How is Twitter actually doing? In terms of userbase now v then
Duck that and duck Microsoft. The board was fully correct not to consult or inform Microsoft.
That part made me smile. It is a pretty good news that MS is not in control of OpenAI.
And if it turns out that this drama really happened out of safety concern rather than personal profits or ego, I would like people to take a step back and realize how great of a news as to where we are as a society.
Let the whole saga play out. Microsoft hasn’t even played a card yet.
This is probably great for Microsoft. Their investment got them low level code access and rights, but OpenAI competed with them for AI services. With OpenAI going more towards non-profit, and Sam now being hire-able, Microsoft may have inadvertently acquired the entire business portion of OpenAI.
Now would be a good time for a disgruntled employee to leak some models and make OpenAI actually open. ;)
except that it seems the employees who stayed are the ones least likely to do this.
Is GPT-4 still the best LLM around? How close are the open source models here?
And we find the backend is just Mechanical Turk.
That’s no longer a joke ever since local open source AI models became a thing.
We discover it was Jimmy Apples sending us inferences all this time.
Datasets~