• dustyData@lemmy.world · 1 year ago

    I get the hype because it is a big deal in machine learning. However, it’s a gross overreaction. These people are high on their own farts.

    • stifle867@programming.dev · 1 year ago

      Oh yeah, it’s undoubtedly an advancement, but as you said, firing someone over that is an overreaction, and that’s putting it generously.

      I understand taking some form of action if there were evidence of something potentially catastrophic, or at least one step removed from that. Do they have public guidelines on where they draw the line? What’s the procedure that says “at this point we pull the plug”? I highly doubt it says “when it can do basic math”.

      • dustyData@lemmy.world · edited · 1 year ago

        “Do they have public guidelines on where they draw the line?”

        This is part of the problem. The non-profit-overseeing-a-for-profit model clearly doesn’t work. Altman either intentionally or opportunistically took this chance to essentially stage a coup against his own board. The other side of the board’s supposed reasons for ousting him was that there are no guidelines and no safeguards; they’re all flying by the seat of their pants, being reckless and destructive all around. Many AI companies are already starting to see lawsuits over damages they failed to foresee.

      • tinkeringidiot@lemmy.world · 1 year ago

        An overreaction by members of the board who wanted to keep AI development slow and “safe”. Sudden news of a major advancement toward AGI (which they believe will destroy humanity; there’s seriously a whole cult around this in AI research circles right now) that they hadn’t been told about sent them off the deep end. Those board members thought they could fire Altman and throw the brakes on, not anticipating that 700 employees would side against them and potentially migrate to Microsoft, where the “AI ethics” crowd would have no influence at all.

        They shot their shot and lost massively, for themselves and their fellow believers. That attitude toward AI is now being labeled a business liability in the minds of every decision-maker in the whole AI world.