I want to use LLMs to automate data analysis and surface insights to my users, but I often notice insights being generated from factually incorrect data. I've tried fine-tuning my prompts, changing the structure in which I pass data to the LLM, and few-shot learning, but there is still some chance of hallucination. How can I build a production-ready application where these insights are surfaced to end users and presenting incorrect insights is unacceptable? I'm out of ideas. Any guidance is appreciated 🙏🏻
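One common guardrail worth sketching here: compute the numbers deterministically in code, let the LLM only narrate them, and then validate any figure that appears in the generated text against the precomputed values before showing it to users. This is a minimal illustration, not a full solution — the helper names and the `revenue` field are hypothetical:

```python
import re

def compute_stats(rows):
    """Deterministically compute the only facts the LLM is allowed to cite."""
    total = sum(r["revenue"] for r in rows)
    avg = total / len(rows)
    return {"total_revenue": round(total, 2), "avg_revenue": round(avg, 2)}

def validate_insight(insight_text, stats, tolerance=0.01):
    """Reject any insight citing a number not present in the precomputed stats."""
    cited = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", insight_text)]
    allowed = list(stats.values())
    for n in cited:
        if not any(abs(n - a) <= tolerance for a in allowed):
            return False  # hallucinated figure: suppress this insight
    return True

rows = [{"revenue": 120.0}, {"revenue": 80.0}]
stats = compute_stats(rows)
# An LLM-generated sentence passes only if its numbers match the stats:
ok = validate_insight("Total revenue was 200.0, averaging 100.0 per row.", stats)
bad = validate_insight("Total revenue was 450.0.", stats)
```

Here `ok` is `True` and `bad` is `False`, so the fabricated figure never reaches the user. The broader pattern is to treat the LLM as a formatter over verified facts rather than as the source of the facts themselves.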

  • Seankala@alien.topB
    1 year ago

    The fact that this is actually getting upvoted is really a sign about what’s happened to this community.