Ok_Relationship_9879@alien.top to LocalLLaMA@poweruser.forum • re: "🐺🐦⬛ LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4" • 10 months ago
That’s pretty amazing. Thanks for all your hard work! Does anyone know if the Nous Capybara 34B is uncensored?
Ok_Relationship_9879@alien.top to LocalLLaMA@poweruser.forum • "GPT-4's 128K context window tested" • 11 months ago • 6 comments
Ok_Relationship_9879@alien.top to LocalLLaMA@poweruser.forum • re: "For roleplay purposes, Goliath-120b is absolutely thrilling me" • 11 months ago
Which models do you find to be good at 16k context for story writing?
Ok_Relationship_9879@alien.top to LocalLLaMA@poweruser.forum • re: "RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models" • 11 months ago
If Chinchilla is right, this dataset could be huge for small models. https://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications
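The Chinchilla result the comment refers to is often summarized as a heuristic of roughly 20 training tokens per model parameter for compute-optimal training (an approximation of the Hoffmann et al. scaling-law fit, not an exact figure). A minimal sketch under that assumption shows why a 30-trillion-token dataset leaves huge headroom for small models:

```python
def chinchilla_optimal_tokens(params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal training-token count for a given
    parameter count, using the rough 20-tokens-per-parameter heuristic."""
    return params * tokens_per_param

# A 7B-parameter model is compute-optimal at roughly 140B tokens,
# so a 30T-token dataset could train such a model far past its
# compute-optimal point (or deduplicate/filter aggressively and
# still have plenty left over).
print(chinchilla_optimal_tokens(7e9) / 1e9)  # → 140.0 (billions of tokens)
```

The 20:1 ratio here is a commonly quoted rule of thumb; the original paper fits the optimum from training curves, so treat this as an order-of-magnitude estimate only.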