According to this tweet,

when gpt4 first finished training it didn’t actually work very well and the whole team thought it’s over, scaling is dead…until greg went into a cave for weeks and somehow magically made it work

So GPT-4 was kind of broken at first. Then Greg spent a few weeks on it and somehow got it working.

So why didn't it work at first, and how did they fix it?
I think this is an important question for the OSS community.

    • dogesator@alien.top · 10 months ago

      Predicting the loss is very different from predicting real-world abilities; they can do the former, not the latter.

      Predicting the future loss once you're already 10% into training is fairly trivial. Predicting the actual abilities, though, is not.
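To see why loss extrapolation is the easy part, here is a minimal sketch (all numbers and the power-law shape are assumptions, not anything from OpenAI): fit a power law to the loss curve from the first 10% of a synthetic training run, then extrapolate to the end. The fit lands very close to the true final loss, yet nothing in it tells you what abilities emerge at that loss.

```python
import numpy as np

# Hypothetical scenario: a 1000-step training run whose loss follows
# an assumed power law L(t) = a * t^(-b), plus a little noise.
rng = np.random.default_rng(0)
steps = np.arange(1, 1001)
true_loss = 5.0 * steps ** -0.3
observed = true_loss * np.exp(rng.normal(0.0, 0.01, steps.size))

# "We are 10% into training": fit a line in log-log space using
# only the first 100 steps, i.e. log L = b * log t + log a.
early = steps <= 100
b, log_a = np.polyfit(np.log(steps[early]), np.log(observed[early]), 1)

# Extrapolate the fitted power law out to the final step.
predicted_final = np.exp(log_a) * 1000 ** b

print(f"predicted final loss: {predicted_final:.3f}")
print(f"actual final loss:    {true_loss[-1]:.3f}")
```

The two numbers come out nearly identical, which is the commenter's point: the loss curve is smooth and extrapolates well, while downstream abilities are a separate, much harder prediction.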