I want to use LLMs to automate analysing data and provide insights to my users, but I often notice insights being generated from factually incorrect data. I've tried refining my prompts, changing the structure in which I pass data to the LLM, and few-shot learning, but there is still some chance of hallucination. How can I build a production-ready application where these insights are surfaced to end users and presenting incorrect insights is not acceptable? I am out of ideas. Any guidance is appreciated 🙏🏻
The short answer is: you can’t. If you want a reliable system that never hallucinates, use rules/templates. It’s also easier to maintain. Ehud Reiter has written extensively about this.
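To make the rules/templates idea concrete, here is a minimal sketch. The schema (`month`, `revenue` columns), thresholds, and wording are assumptions for illustration; the point is that every number in the output is computed deterministically from the data, so there is nothing to hallucinate.

```python
import pandas as pd

def revenue_insight(df: pd.DataFrame) -> str:
    """Rule/template-based insight: all figures come straight from the data."""
    # Assumes hypothetical columns 'month' and 'revenue'.
    monthly = df.groupby("month", sort=True)["revenue"].sum()
    if len(monthly) < 2:
        return "Not enough data to compare months."
    previous, current = monthly.iloc[-2], monthly.iloc[-1]
    change = (current - previous) / previous * 100

    # Simple rules choose the wording; a template carries the numbers.
    if abs(change) < 1:
        trend = "stayed roughly flat"
    elif change > 0:
        trend = f"grew {change:.1f}%"
    else:
        trend = f"fell {abs(change):.1f}%"
    return (f"Revenue {trend} month over month, "
            f"from {previous:,.0f} to {current:,.0f}.")

# Example:
# df = pd.DataFrame({"month": ["2024-01", "2024-01", "2024-02"],
#                    "revenue": [100, 50, 180]})
# print(revenue_insight(df))  # "Revenue grew 20.0% month over month, from 150 to 180."
```

If you still want LLM-style phrasing, one common compromise is to let the model only rephrase a sentence like the one above, never compute or cite numbers itself, and validate that the output still contains the original figures before showing it to users.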
Makes sense