As the title says, I'm curious whether grokking has been shown to happen with LLMs. Could it be the case with GPT-4?

  • yannbouteiller@alien.top
    10 months ago

    Am I correct in saying that “grokking” is essentially an effect of regularization, i.e. reaching good generalization by pushing the weights to be as small as possible, until the model’s effective capacity is smaller than the dataset?
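
    The weight-decay intuition above can be sketched with a toy example. This is a hypothetical illustration (not the actual grokking setup from the papers, which uses transformers on modular arithmetic): an overparameterized linear model fit by gradient descent, with and without an L2 penalty, showing that weight decay drives the solution toward a smaller-norm (lower effective capacity) fit.

    ```python
    import numpy as np

    # Hypothetical toy setup: 10 samples, 50 features, so the model can
    # interpolate the data in many ways (overparameterized regime).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 50))
    y = rng.normal(size=10)

    def train(weight_decay, steps=2000, lr=0.01):
        """Gradient descent on squared error + L2 penalty (weight decay)."""
        w = np.zeros(50)
        for _ in range(steps):
            # Gradient of mean squared error, plus the weight-decay term
            grad = X.T @ (X @ w - y) / len(y) + weight_decay * w
            w -= lr * grad
        return w

    w_plain = train(weight_decay=0.0)
    w_decay = train(weight_decay=0.1)

    # Weight decay shrinks the solution norm relative to plain training.
    print(np.linalg.norm(w_plain), np.linalg.norm(w_decay))
    ```

    In the grokking results the same pressure toward small weights is conjectured to eventually favor a simple, generalizing circuit over a memorizing one; this sketch only shows the norm-shrinking mechanism itself.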