As requested, this is the subreddit’s second megathread for model discussion. This thread will now be posted at least once a month to keep the discussion current and help cut down on duplicate posts.

I also saw that we hit 80,000 members recently! Thanks to every member for joining and making this happen.


Welcome to the r/LocalLLaMA Models Megathread

What models are you currently using and why? Do you use 7B, 13B, 33B, 34B, or 70B? Share any and all recommendations you have!

Examples of popular categories:

  • Assistant chatting

  • Chatting

  • Coding

  • Language-specific

  • Misc. professional use

  • Role-playing

  • Storytelling

  • Visual instruction


Have feedback or suggestions for other discussion topics? All suggestions are appreciated and can be sent to modmail.

^(P.S. LocalLLaMA is looking for someone who can manage Discord. If you have experience modding Discord servers, your help would be welcome. Send a message if interested.)


Previous Thread | New Models

  • USM-Valor@alien.top · 11 months ago

    13B and 20B Noromaid for RP/ERP.

    I am experimenting with comparing GGUF to EXL2, as well as with stretching context. So far, Noromaid 13B at GGUF Q5_K_M stretches to 12k context on a 3090 without issues. Noromaid 20B at Q3_K_M stretches to 8k without issues and is, in my opinion, superior to the 13B. I have recently stretched Noromaid 20B to 10k using 4bpw EXL2 and it is giving coherent responses, though I haven’t used it enough to assess the quality.

    All this is to say, if you enjoy roleplay you should be giving Noromaid a look.
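
    For anyone who wants to try the stretched-context GGUF setup described above, here is a minimal llama-cpp-python sketch. The GGUF file name, context length, and RoPE scaling factor are illustrative assumptions, not exact settings from the comment; check the model card for the values that actually apply.

    ```python
    # Minimal sketch: loading a quantized GGUF with an extended context window
    # via llama-cpp-python. File name, n_ctx, and rope_freq_scale are assumptions.
    from llama_cpp import Llama

    llm = Llama(
        model_path="noromaid-13b.Q5_K_M.gguf",  # hypothetical local path to the quant
        n_ctx=12288,           # stretch past the Llama-2 native 4k context
        rope_freq_scale=0.33,  # linear RoPE scaling (~4k -> 12k); skip if the GGUF metadata already sets it
        n_gpu_layers=-1,       # offload all layers to the GPU (e.g. a 3090)
    )

    out = llm(
        "### Instruction:\nWrite a short scene introduction.\n\n### Response:\n",
        max_tokens=256,
    )
    print(out["choices"][0]["text"])
    ```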