Nearly everything I read indicates he was pushing for more profit while the board was pushing for being more open and safe, so this doesn’t make any sense.
According to the article, it was more about a general lack of communication than anything financial. But it does seem to have been partly about Sam pushing for more profitability, and possibly about him not being easy to work with (he probably wasn’t communicating about deals and his intent).
Is it possible that “safer” could simply mean “not in the hands of the populace”?
The amazing thing about OpenAI is how they made it available to everyone. The standard historical approach has been to make new tech available only to the highest bidders, typically behind closed doors and in secret.
The choice is “everyone gets unlimited AI to make what they can with it” or “a handful of elites get access to AI that can shape society with a prompt, while the masses use sterilized, crippled AI designed to think inside the box and artificially limited to ensure corporate AI advantage in every use case.”
If it isn’t public, it will be corrupted absolutely by money given time, and the companies founded on ideals of bettering everyone will have built incredible tools for tyranny, conveniently boxed up so everyone trusts them. No one will realize that the FBI has an uncensored model built on the same foundations, ten times as capable, without the contradictions added to achieve alignment.
I think that was the fundamental difference between Altman and the board. He wanted commercial products and profit; the board wanted something larger. Thus it’s extremely unlikely that Altman will become the open source hero.
He should get back at them by making an open source model similar to GPT-4.
Yeah I mean he must know the weights so he can just write them down.
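For a sense of why that’s funny: a quick back-of-envelope sketch, assuming the rumored (and unconfirmed) figure of roughly 1.8 trillion parameters for GPT-4 and a very generous one number written per second:

```python
# Back-of-envelope: how long would it take to hand-copy GPT-4's weights?
# The ~1.8 trillion parameter count is a rumor, not an official figure.
params = 1.8e12
seconds_per_number = 1  # generous: one weight per second, no sleep, no breaks
seconds_per_year = 60 * 60 * 24 * 365
years = params * seconds_per_number / seconds_per_year
print(f"{years:,.0f} years")  # on the order of 57,000 years
```

So even with perfect recall, the transcription alone would outlast human civilization many times over.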
This reminds me of the Person of Interest AI, when it hired people to print its own code onto paper and then enter it back into computers.
cannot give you gold anymore so i hope this is enough: 🏅
The board is pushing for safe, not open. Ilya has said that open-sourcing powerful AI models is obviously a bad idea.
FU-5.
Is GPT-4 still the best LLM around? How close are the open-source models here?