I see a fair amount of laughing – by no means an overabundance, but enough to “trigger” me – at some of the “obvious” research that gets posted here.
One example from a week or two ago that’s been rattling around in my head was someone replying to a paper with (paraphrased):
That’s just RAG with extra steps.
Exactly. But what were those steps attempting? Did they make RAG better?
Yes? Great, let’s keep pulling that thread.
No? OK, let’s let others know that pulling the thread in this direction has been tried, so they can take a different approach; maybe the thread can be pulled in a different direction.
We are at the cusp of a cultural and technical shift. Let’s not shame the people sharing their work with the community.
That is how science is supposed to work. Unfortunately, even in academia, there is less and less reward for publishing negative results.
Maybe we will someday have an AI publisher that encourages negative results? I think one of the promising things about AI is removing the human tendency to value only what advances a career. Whether it’s publication credit or a pay raise, an AI can be much more objective.
Just to point out the obvious, because I think it gets lost in the weeds sometimes:
When you say “Maybe we will someday have an AI publisher”, that is still a person or company with a computer running a program.
So it will still be researchers doing research, but the tools they use will help produce and value negative results more than they have been valued historically.
My opinion is that this distinction needs to be made clear from time to time, so people who are learning will understand that AI isn’t a mythological creature we’re attempting to tame. It’s a new “programming” paradigm that we are trying to understand and use to improve our workloads and workflows.