• drplan@alien.top · 1 year ago

    What now? Another couple of megawatt-hours spent training on mostly the same datasets with mostly the same architecture? I love LLMs and open source, but reinventing the wheel a hundred times and spending so much energy on redundant results is somewhat pointless. There should be a community effort to build the best, lasting models.