I’ve been using self-hosted LLM models for roleplay purposes. But these are the worst problems I face every time, no matter what model and parameter preset I use.

I’m using:

Pygmalion 13B AWQ

Mistral 7B AWQ

SynthIA 13B AWQ [Favourite]

WizardLM 7B AWQ

  1. It mixes up who’s who, and often starts to behave like the user.

  2. It writes in third-person perspective or as narration.

  3. It sometimes generates the exact same reply (word for word) back to back, even though new inputs were given.

  4. It starts to generate a dialogue or screenplay script instead of a normal conversation.

Does anyone have solutions for these?

  • Gnodax@alien.top · 1 year ago

    Those all sound like the typical symptoms of feeding too much generated content back into the context buffer. Limit the dynamic part of your context buffer to about 1k tokens. At least that’s been my experience using 13B models as chatbots. With exllama you just add “-l 1280”. Other systems should offer similar functionality.

    If you want to get fancy, you can fill the rest of the context with whatever backstory you want.
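
    To illustrate the idea, here is a minimal sketch of that approach: keep a fixed backstory at the top of the context and only as many recent chat turns as fit within a token budget. The names (`build_context`, `BACKSTORY`) and the whitespace-based token count are my own placeholders, not part of any specific frontend; a real setup would count tokens with the model’s actual tokenizer.

    ```python
    # Sketch: fixed backstory + only the most recent turns that fit a budget.
    # Token counting here is a crude whitespace split; swap in the model's
    # tokenizer for accurate counts.

    BACKSTORY = "You are Aria, a travelling bard. Speak in first person."  # example

    def count_tokens(text: str) -> int:
        # Placeholder: approximate tokens by whitespace-separated words.
        return len(text.split())

    def build_context(backstory: str, turns: list[str], budget: int) -> str:
        kept: list[str] = []
        used = 0
        # Walk the history newest-first, keeping turns until the budget is hit.
        for turn in reversed(turns):
            cost = count_tokens(turn)
            if used + cost > budget:
                break
            kept.append(turn)
            used += cost
        # Static backstory goes first, then the surviving turns in order.
        return "\n".join([backstory] + list(reversed(kept)))
    ```

    With this, old generated text falls out of the prompt instead of being echoed back to the model, which is what tends to cause the repetition and persona drift described above.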