Most of the benchmarks seem to measure regurgitation of factual knowledge from in-weights learning, which IMO everyone should accept as a misguided task, instead of testing in-context learning, which I would argue was the goal of LLM training. I'd say they are probably harmful to the cause of improving future LLMs.
I agree, and the Leaderboard's newly added DROP metric is a step in the right direction.