According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star) precipitated the board’s actions.
The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup’s search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.
I am slightly confused.
So, Sam Altman and colleagues discovered a very powerful thing called Q*. This will make OpenAI very powerful and will make the board a lot of money.
So, why did this cause the board to fire him?
From the article, it seemed like the board was afraid of Q* and fired him to stop it from being released without proper safety measures.
Could someone please help clarify this?
Thanks.
It’s complicated. The board at OpenAI is (or was) focused on AI safety and is not made up entirely of investors. Its goal was not to maximize profits.
I forget the exact phrasing, but the board said they fired Altman for not being completely honest with them. Based on that wording, it seems likely that Altman was not forthright about the capabilities of this breakthrough, possibly because he expected the board would halt its development out of safety concerns.