Maybe we should build a dataset of reviews from top faculty reviewers and train a model on it. Then that model can review papers. Unless a paper uses the same model, in which case you need a second model that only reviews papers about the first, while the first reviews papers about the second. Both models improve together, akin to stable GAN training. Then someone writes up this whole setup as a paper, and we enter a deeper layer of recursion.
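For what it's worth, the cross-review loop being joked about looks roughly like this as code. This is a toy sketch only: `ReviewerModel`, `write_paper`, `review`, `update`, and the scalar `skill` are all made up for illustration, not any real reviewing system.

```python
import random

class ReviewerModel:
    def __init__(self, name: str):
        self.name = name
        self.skill = random.random()  # stand-in for learned parameters

    def write_paper(self) -> dict:
        # A paper about (or produced with) this model; quality tracks skill.
        return {"subject": self.name, "quality": self.skill}

    def review(self, paper: dict) -> float:
        # A review score: the reviewer's assessment, capped by its own skill.
        return min(self.skill, paper["quality"]) + random.gauss(0.0, 0.01)

    def update(self, feedback: float, lr: float = 0.05) -> None:
        # The reviewed model improves a little from the feedback it receives;
        # a real system would take a gradient step, this is just a nudge.
        self.skill += lr * feedback


model_a = ReviewerModel("A")  # reviews papers about model B
model_b = ReviewerModel("B")  # reviews papers about model A

for _ in range(100):
    # Alternating cross-review, loosely analogous to one round of GAN training.
    model_b.update(model_a.review(model_b.write_paper()))
    model_a.update(model_b.review(model_a.write_paper()))

print(model_a.skill, model_b.skill)
```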