• Shoddy_Vegetable_115@alien.topB

    Exactly. It didn’t hallucinate even once in my tests. I used RAG and it gave me perfect, to-the-point answers. I know most people want more verbose outputs; it’s just that it’s very good for factual retrieval use cases.
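    For anyone curious, here’s a minimal sketch of this kind of constrained RAG prompt (the chunk text and retriever output are placeholders, not the exact pipeline used here):

    ```python
    # Minimal sketch of a RAG prompt that constrains the model to the retrieved
    # context and asks for a brief answer; the chunk text below is placeholder data.

    def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
        """Assemble a prompt that restricts the model to the retrieved context."""
        context = "\n\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks))
        return (
            "Answer the question using ONLY the context below. "
            "If the answer is not in the context, say you don't know. "
            "Keep the answer short and to the point.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}\nAnswer:"
        )

    if __name__ == "__main__":
        chunks = [
            "The library's default batch size is 32.",
            "The batch size can be overridden with a command-line flag.",
        ]
        print(build_rag_prompt("What is the default batch size?", chunks))
    ```

    Telling the model to answer only from the context (and to say “I don’t know” otherwise) is what keeps the answers short and factual.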

    • julylu@alien.topB

      Maybe for RAG, a short answer is less prone to hallucination? I will test more. Thanks.

    • Intel@alien.topB

      This is a fine-tuned/instruction-tuned model. Explicit system prompts or instructions like “generate a long, detailed answer” can make the model generate longer responses (see the sketch below). 🙂

      –Kaokao, AI SW Engineer @ Intel
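
      As a rough illustration of the point above (the model id and prompts are placeholders, not an official recipe), an explicit system instruction with a Hugging Face chat model:

      ```python
      # Sketch: steer response length with an explicit system instruction on an
      # instruction-tuned chat model. The model id below is a placeholder.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "your-instruction-tuned-model"  # placeholder checkpoint name
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

      messages = [
          {"role": "system", "content": "Generate a long, detailed answer."},
          {"role": "user", "content": "Explain how retrieval-augmented generation works."},
      ]
      # Works when the tokenizer ships a chat template; otherwise format the
      # prompt by hand as documented on the model card.
      input_ids = tokenizer.apply_chat_template(
          messages, add_generation_prompt=True, return_tensors="pt"
      ).to(model.device)
      output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
      print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
      ```

      Swapping the system message for something like “Keep the answer short and factual” pushes the model the other way, which is the same lever the RAG setup above relies on.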