What now? Another couple of megawatt-hours spent training on mostly the same datasets with mostly the same architecture? I mean: I love LLMs and open source, but reinventing the wheel 100 times and spending so much energy on redundant results is somewhat pointless. There should be a community effort to build the best and most lasting models.