High repetition penalty? One model I merged suddenly started speaking Spanish in one summarisation task lol
Really cool, I'll check the video out. Since we've found an actually qualified person though, let me ask a few layman questions; hope you have time to answer them!
First, sampling methods: most of them look simple, but we still don't really know how to tune them. Do you think novel sampling methods, or specific combinations of existing ones, could improve output quality by a lot?
For instance, beam search: does quality keep improving roughly linearly as you increase the beam width, or not?
Do you think the ideal values for temperature, top_k and top_p are context-dependent, model-dependent, or both?
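For context, here's roughly how I picture those three knobs interacting in a single decoding step. It's a minimal sketch in plain numpy, not any particular engine's implementation, and the default values are just placeholders I've seen people use:

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=40, top_p=0.95, rng=None):
    """One decoding step: temperature scaling, then top-k, then top-p (nucleus) filtering."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)

    # softmax over the temperature-scaled logits
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # top-k: zero out everything outside the k most likely tokens
    if 0 < top_k < len(probs):
        kth = np.sort(probs)[-top_k]
        probs = np.where(probs >= kth, probs, 0.0)

    # top-p: keep the smallest high-probability set whose mass reaches top_p
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, top_p * probs.sum())) + 1
    keep = order[:cutoff]

    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    filtered /= filtered.sum()
    return int(rng.choice(len(probs), p=filtered))

# toy example with a fake 10-token vocabulary
print(sample_next_token(np.random.randn(10)))
```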
Exactly what I was thinking. I just fail miserably each time I merge the layers.
Any tips/attempts on frankensteining two Yi-34B models together to make a ~51B model?
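What I've been attempting is basically the passthrough / frankenmerge idea: stack overlapping slices of decoder layers from the two donors (mergekit has a mode for this). Here's just the layer plan I've been trying, assuming both donors keep Yi-34B's 60 decoder layers; the slice sizes are my own guesses, nothing canonical:

```python
# Toy sketch: a layer plan for "frankensteining" two 60-layer donors (A and B).
# Nothing here touches weights; it only shows which donor layer lands where.

def frankenmerge_plan(n_layers=60, slice_size=30, overlap=15):
    """Stack alternating slices of decoder layers from donors A and B,
    overlapping them the way passthrough-style merges usually do."""
    plan, start, donor = [], 0, "A"
    while start < n_layers:
        end = min(start + slice_size, n_layers)
        plan += [(donor, i) for i in range(start, end)]
        donor = "B" if donor == "A" else "A"
        start = end - overlap if end < n_layers else end
    return plan

plan = frankenmerge_plan()
print(f"merged depth: {len(plan)} layers")  # 90 layers vs the donors' 60 -> roughly 34B * 90/60, i.e. ~51B
for idx, (donor, layer) in enumerate(plan[:6]):
    print(f"merged layer {idx:2d} <- donor {donor} layer {layer:2d}")
```

The part I keep getting wrong is presumably where to put the slice boundaries and how much overlap to use, not the mechanics.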
That's acceptable. Did you do a full train or a fine-tune, though? And how much data?
Goddammit, I just fine-tuned Tortoise on a custom voice. Can't wait for WebUIs for StyleTTS. Hope it's easy to fine-tune.
CodeBooga.
Sure, it’s just going to generate 5 tokens per week
No fucking way. GPT-3 has 175B params. There is no way they discovered the "secret sauce" to make an ultra-smart 20B model. The TruthfulQA paper suggests that bigger models are more likely to score worse, and ChatGPT's TruthfulQA score is impressively bad. I think the papers behind today's impressive open-source models are at most 12-20 months old. The Turbo version is probably just quantized, that's all.
As I understand it, LLMs basically write the average pattern of a billion books, so when you add GPT-4 and 3.5 data into the mix, which averages the average, things get boring very fast. As for model suggestions, Yi-34B-based ones look fine for literary purposes.
I think being very specific and editing (co-writing with the model) could help. Some LoRA training on specific books could also help it mimic a certain style; rough sketch below.
High temperature and repetition penalty could help too.
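For the LoRA idea, here's roughly where I'd start with peft. No claims that these hyperparameters are right for style transfer; the base model name is just a placeholder, and you'd still need to feed it chunks of the target books as ordinary causal-LM text:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "01-ai/Yi-34B"  # placeholder; swap in whatever base model you're actually using

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

lora_cfg = LoraConfig(
    r=16,                 # adapter rank: more capacity to soak up the author's style, but slower
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # should report well under 1% of the base weights

# From here: chunk the books into plain-text samples, run a normal causal-LM
# fine-tune (Trainer / SFTTrainer), then load or merge the adapter at inference.
```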