Can LLMs stack more layers than the largest ones currently have, or is there a bottleneck? Is it because the gradients can’t propagate properly to the beginning of the network? Because inference would be too slow?

If anyone could provide a paper that talks about layer stacking scaling I would love to read it!

  • iantimmis@alien.top · 10 months ago

    It becomes very expensive compute-wise, but where we’re actually running up against the limit is the scale of the data. Researchers have discovered “scaling laws” (see the Chinchilla paper) that determine how big your model should be given the amount of data you have. We could go bigger, but there’s no reason to train a multi-trillion-parameter model, for example, because it would just be wasted capacity.
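
    As a rough sketch of that trade-off, the widely quoted rule of thumb from the Chinchilla paper is on the order of 20 training tokens per parameter; the fitted coefficients in the paper are more nuanced, so treat this only as an order-of-magnitude estimate:

    ```python
    def chinchilla_optimal_params(num_tokens: float, tokens_per_param: float = 20.0) -> float:
        """Rough compute-optimal parameter count for a given training-token budget.

        Based on the ~20 tokens-per-parameter heuristic popularized by the
        Chinchilla paper (Hoffmann et al., 2022). The paper's fitted scaling
        coefficients differ slightly, so this is an approximation.
        """
        return num_tokens / tokens_per_param

    # Example: with a 10-trillion-token dataset, a model around 500B parameters
    # is roughly compute-optimal; a multi-trillion-parameter model would mostly
    # be wasted capacity at that data scale.
    print(f"{chinchilla_optimal_params(10e12) / 1e9:.0f}B parameters")
    ```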