Source: https://openai.com/blog/openai-announces-leadership-transition

Today, it was announced that Sam Altman will no longer be CEO of OpenAI, or affiliated with the company, due to a lack of “candidness” with the board. This is extremely unexpected, as Sam Altman is arguably the most recognizable face of state-of-the-art AI (though of course none of it would be possible without the great team at OpenAI). There is a lot of speculation in the air, but there clearly must have been a serious reason to make such a drastic decision.

This may or may not materially affect ML research, but it is plausible that the lack of “candidness” is related to copyrighted data, or to the use of data sources that could land OpenAI in hot water with regulators. Recent lawsuits (https://www.reuters.com/legal/litigation/writers-suing-openai-fire-back-companys-copyright-defense-2023-09-28/) have raised questions about both the morality and the legality of how OpenAI and other research groups train LLMs.

Of course we may never know the true reasons behind this action, but what does this mean for the future of AI?

  • solresol@alien.top · 1 year ago

    Normally, in most companies, the board's responsibility is to represent the interests of the shareholders; for example, ensuring that the financial statements are a genuine representation of the value of the business. And usually, a statement about a lack of candidness from the CEO is corporate speak for “the CEO is using company money for personal gain”, which in turn is corporate speak for “we found evidence the CEO is guilty of embezzlement, but we don't want to ruin the prosecution's case or risk defamation by saying that out loud”.

    But it could also be anything that materially affects shareholder value that the CEO isn't being honest about. For example, if Sam Altman knew that November's GPT-4 was disastrously worse at programming than previous releases, but told the board that everything was fine and had all checked out as perfect… that could also count as a “lack of candidness”. He would have to have done this repeatedly for the board to swoop in; some sort of “this is the final straw” moment after last week's announcements.

    That all said, OpenAI's board has a wider remit: they also have a responsibility around safeguarding the use of AI, and a few other things like that. So if OpenAI had actually built an AGI and Sam Altman was lying about it, they would also have a responsibility to fire him.

    In every example I can think of, this announcement means that Sam Altman has been lying about something. In my experience, dishonest executive leadership almost always slows research and development down; so OpenAI has succeeded despite Sam Altman's leadership, rather than because of it. So I predict that AI research speeds up even more now. (Uh oh.)