I want to use LLMs to automate analysing data and surface the resulting insights to my users, but I often notice insights being generated from factually incorrect data. I've tried fine-tuning my prompts, changing the structure in which I pass data to the LLM, and few-shot learning, but there's still some chance it hallucinates. How can I build a production-ready application where these insights are surfaced to end users and presenting incorrect insights is not acceptable? I'm out of ideas. Any guidance is appreciated 🙏🏻

  • EvM@alien.topB · 1 year ago
    The short answer is: you can’t. If you want a reliable system that never hallucinates, use rules/templates. It’s also easier to maintain. Ehud Reiter has written extensively about this.
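
    To make that concrete, here is a minimal sketch of the rules/templates approach: every number is computed deterministically and only rendered through fixed template strings, so there is nothing for a model to hallucinate. The column names, the 5% threshold, and the `revenue_insights` helper are all hypothetical, just for illustration:

    ```python
    import pandas as pd

    def revenue_insights(df: pd.DataFrame) -> list[str]:
        """Return insight strings built only from verified aggregates."""
        insights = []
        monthly = df.groupby("month")["revenue"].sum()
        # Rule: need at least two months of data to compare anything.
        if len(monthly) < 2:
            return insights
        change = (monthly.iloc[-1] - monthly.iloc[-2]) / monthly.iloc[-2]
        # Rule: only surface the insight if the change is material (>= 5%).
        if abs(change) >= 0.05:
            direction = "rose" if change > 0 else "fell"
            # Template: every number comes straight from the computation above.
            insights.append(
                f"Revenue {direction} {abs(change):.1%} from "
                f"{monthly.index[-2]} to {monthly.index[-1]}."
            )
        return insights
    ```

    The trade-off is flexibility: each insight type needs its own rule and template, but the output is fully auditable.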

  • UndocumentedMartian@alien.topB · 1 year ago

    By not using LLMs to do the modelling. Use specialized models for data analysis and use an LLM to orchestrate those models and communicate with the user. LLMs are not cheap to run, though, so you may want to do a cost/benefit analysis.
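
    One common shape for that split, sketched below under assumed names: deterministic code computes the facts, the LLM only rephrases them for the user, and a cheap guardrail rejects any output containing a number that wasn't in the verified facts. `call_llm` is a placeholder for whatever client you use, and the column name is hypothetical:

    ```python
    import re
    import pandas as pd

    def compute_facts(df: pd.DataFrame) -> dict[str, float]:
        # Every number the user will ever see is computed here, not by the LLM.
        return {
            "mean_order_value": round(df["order_value"].mean(), 2),
            "n_orders": len(df),
        }

    def verbalize(facts: dict[str, float], call_llm) -> str:
        prompt = (
            "Rewrite these facts as one friendly sentence. Use ONLY these "
            f"numbers and change none of them: {facts}"
        )
        text = call_llm(prompt)
        # Guardrail: reject the output if it contains a number that is not
        # among the verified facts (a cheap hallucination check).
        allowed = {str(v) for v in facts.values()}
        for num in re.findall(r"\d+(?:\.\d+)?", text):
            if num not in allowed:
                raise ValueError(f"LLM introduced an unverified number: {num}")
        return text
    ```

    If the check fails you can retry or fall back to a plain template, so the user never sees an unverified number.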

    • software-n-erd@alien.topOPB · 1 year ago

      Gotcha, I honestly wasn't aware of any specialized data analysis models. Have you used any that you'd recommend I look at?

  • Seankala@alien.topB · 1 year ago

    The fact that this is actually getting upvoted is really a sign of what's happened to this community.