So let’s say we ask an LLM to predict what would happen if we put a pen on the table, and it simulates a thousand possibilities. Is there an LLM that could run perpendicular to these outputs, acting as a sort of summarizer/filter? Is there a project working on anything like this?
Been looking, not finding. Thanks!
High temperature + batch inference on the same prompt
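Roughly, that pattern looks like the sketch below. This is a toy stand-in, not a real inference call: `sample_model`, its canned outcomes, and the weight-scaling trick are all made up to illustrate how raising the temperature spreads samples across more possibilities.

```python
import random

def sample_model(prompt: str, temperature: float, rng: random.Random) -> str:
    """Stand-in for a real LLM call. Higher temperature flattens the
    distribution over continuations, so rarer outcomes show up more often."""
    outcomes = ["pen stays put", "pen rolls off", "pen balances on its end"]
    base_weights = [0.90, 0.08, 0.02]
    # Crude temperature scaling: raise each weight to the power 1/T.
    scaled = [w ** (1.0 / temperature) for w in base_weights]
    return rng.choices(outcomes, weights=scaled, k=1)[0]

def batch_sample(prompt: str, n: int, temperature: float, seed: int = 0) -> list[str]:
    """The 'parallel' step: n independent samples of the same prompt."""
    rng = random.Random(seed)
    return [sample_model(prompt, temperature, rng) for _ in range(n)]

samples = batch_sample("What happens if I put a pen on the table?",
                       n=1000, temperature=1.5)
```

In practice you'd get the same effect by sending the same prompt many times (or using a batched `n` parameter, where the API supports one) with the temperature turned up.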
Yeah, I agree this is decent. There could also be another layer of prompting, with some logic to steer it along more particular paths, in combination with the temperature adjustment. But that’s the parallel processing; what about the perpendicular part? Just another LLM layer taking in all the answers and choosing the best? I’m hoping for something a bit more integrated than that. Any ideas?
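For what it's worth, the simplest version of that "perpendicular" pass doesn't even need a second model: self-consistency-style majority voting collapses the batch into a ranked summary. A minimal sketch (the sample strings and the `aggregate` helper are hypothetical, just to show the shape of the step):

```python
from collections import Counter

def aggregate(samples: list[str], top_k: int = 3) -> list[tuple[str, float]]:
    """The 'perpendicular' step: collapse a batch of sampled outcomes
    into a ranked summary with empirical probabilities."""
    counts = Counter(samples)
    total = len(samples)
    return [(outcome, count / total) for outcome, count in counts.most_common(top_k)]

# Made-up batch standing in for 1000 high-temperature samples:
batch = ["stays put"] * 870 + ["rolls off"] * 110 + ["balances on end"] * 20
summary = aggregate(batch)
```

Exact-string voting only works when outputs are short and canonical; for free-text answers you'd swap the `Counter` for a clustering step or a judge LLM that groups semantically equivalent answers before ranking them, which is closer to the integrated summarizer/filter being asked about.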