I’ve been using self-hosted LLMs for roleplay. These are the worst problems I run into every time, no matter which model or parameter preset I use.

I’m using:

Pygmalion 13B AWQ

Mistral 7B AWQ

SynthIA 13B AWQ [Favourite]

WizardLM 7B AWQ

  1. It mixes up who’s who and often starts to behave like the user.

  2. It writes in third-person perspective or slips into narration.

  3. It sometimes generates the exact same reply (word for word) back to back, even though new inputs were given.

  4. It starts generating dialogue in a screenplay-script format instead of a normal conversation.

Does anyone have solutions for these?
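
For problems 1 and 3 in particular, is this something better handled at the generation/sampling layer? Here’s a minimal, hypothetical sketch of what I mean, using the Hugging Face transformers API with a repetition penalty and stop strings; the model path, prompt format, and parameter values are placeholders, not my actual setup:

    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              StoppingCriteria, StoppingCriteriaList)

    model_id = "path/to/your-awq-model"  # placeholder; any AWQ chat model (needs autoawq installed)

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    class StopOnStrings(StoppingCriteria):
        """Stop as soon as any of the given strings appears in the newly generated text."""
        def __init__(self, stops, tokenizer, prompt_len):
            self.stops, self.tokenizer, self.prompt_len = stops, tokenizer, prompt_len

        def __call__(self, input_ids, scores, **kwargs):
            new_text = self.tokenizer.decode(input_ids[0][self.prompt_len:], skip_special_tokens=True)
            return any(s in new_text for s in self.stops)

    # Placeholder prompt format; real roleplay front-ends use their own templates.
    prompt = "You are Aria. Stay in first person and never speak for User.\nUser: Hello there.\nAria:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    prompt_len = inputs["input_ids"].shape[1]

    output = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=True,
        temperature=0.8,
        repetition_penalty=1.15,  # discourages verbatim repeats (problem 3)
        stopping_criteria=StoppingCriteriaList(
            [StopOnStrings(["\nUser:"], tokenizer, prompt_len)]  # cut off before it speaks as me (problem 1)
        ),
    )
    print(tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True))

Or is this more of a prompt-template problem than a sampling one?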

  • Ravenpest@alien.topB · 10 months ago

    7b is waay too dumb to be able to roleplay right now. 13b is the bare minimum for that specific task.

    • Susp-icious_-31User@alien.topB · 10 months ago

      The exceptions I’d make are OpenHermes 2.5 7b and OpenChat 3.5 7b, both pretty good Mistral finetunes. I’d use them over a lot of 13bs. But are they approaching the level of the 34/70b models? No, you can easily tell, but they’re not stupidly dumb anymore.