According to this tweet,

when gpt4 first finished training it didn’t actually work very well and the whole team thought it’s over, scaling is dead…until greg went into a cave for weeks and somehow magically made it work

So GPT-4 was apparently broken at first. Then Greg spent a few weeks fixing it, and somehow it worked.

So why didn't it work at first, and how did they fix it?
I think this is an important question for the OSS community.

  • maxinator80@alien.topB
    10 months ago

Sam Altman mentioned that GPT-4 is actually super difficult to work with. So I guess it simply isn’t as straightforward as pushing in a prompt at the front and getting tokens out the back. Anything further would be speculation, but there must be something.

    • CosmosisQ@alien.topB
      10 months ago

He’s just alluding to the fact that most enterprise customers can't use base models: they expect to be interacting with a human-like, dialogue-driven agent or chatbot rather than a supercharged text-completion engine. It’s a shame, given that, used properly, the GPT-4 base model is far superior to the lobotomized version made generally available through the API.