I’m struggling to get 7B models to do anything useful. I’m obviously doing something wrong, since so many people seem to get good results out of them.

For me, though, they just won’t follow instructions: they keep repeating themselves and occasionally start conversing with themselves.

Does anyone have any pointers on what I’m doing wrong?
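For context, from what I’ve read, repetition and the model talking to itself are often down to feeding it a raw prompt instead of its expected chat template, and to not setting stop handling or a repetition penalty. Below is a minimal sketch of the kind of setup I mean (assuming llama-cpp-python and a chat-tuned 7B GGUF; the model path and parameter values are placeholders, not a known-good recipe):

```python
# Minimal sketch, assuming llama-cpp-python and a chat-tuned 7B GGUF.
# The model path and all parameter values below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,  # context window size
)

# create_chat_completion applies the model's own chat template,
# which is usually what stops raw-prompt "talking to itself" behaviour.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain in two sentences why the sky is blue."},
    ],
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,  # mild penalty to curb repetition loops
)
print(out["choices"][0]["message"]["content"])
```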

  • VertexMachine@alien.top · 10 months ago

    I’m seconding that. I’m actually amazed by how it performs, frequently getting answers similar to or better than bigger models. I’m starting to think we lose a lot to quantization with the bigger models…