" we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation"
Big if true, we wouldn’t need to buy 3090 cards anymore to get sufficient memory, just buying more RAM would suffice
" we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation"
Big if true, we wouldn’t need to buy 3090 cards anymore to get sufficiant memory, just buying more RAM would suffice
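For context, speedup claims like that usually rest on conditional execution: only a small block of the feedforward layer's neurons is evaluated per token instead of the full matrix. Here is a minimal NumPy sketch of that general idea, not the paper's actual method; the sizes, the `router`, and the block-selection scheme are illustrative assumptions.

```python
# Illustrative sketch (hypothetical names/sizes), comparing a dense feedforward
# layer with a "conditional" one that evaluates only one block of neurons per token.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, block = 768, 3072, 64          # assumed sizes, not from the paper

W1 = rng.standard_normal((d_model, d_ff)) * 0.02
W2 = rng.standard_normal((d_ff, d_model)) * 0.02

def dense_ff(x):
    """Baseline: every token multiplies against all d_ff hidden neurons."""
    return np.maximum(x @ W1, 0.0) @ W2

# Hypothetical router that picks one block of `block` neurons per token.
router = rng.standard_normal((d_model, d_ff // block)) * 0.02

def conditional_ff(x):
    """Evaluate only the chosen block per token: roughly d_ff/block less work."""
    out = np.empty_like(x)
    for i, row in enumerate(x):
        b = int(np.argmax(row @ router))        # choose a block for this token
        sl = slice(b * block, (b + 1) * block)
        out[i] = np.maximum(row @ W1[:, sl], 0.0) @ W2[sl, :]
    return out

x = rng.standard_normal((4, d_model))
print(dense_ff(x).shape, conditional_ff(x).shape)   # both (4, 768)
```

With these assumed sizes, the conditional path touches 64 of 3072 neurons per token, which is where the claimed order-of-magnitude CPU savings would come from if quality held up.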
The improvement is so small it could be within the margin of error
“Close to GPT4” is about as true as “me, close to Usain Bolt in the 100m dash” lol
Local models aren’t censored lol
So that means that we can get even better finetunes in the future? Noice!
I’m getting tired of all these merges, as if they were the magical solution to everything
I know, right? Getting that much investment for something you can so easily cheat on makes me sick
Llama 2 was pre-trained on older data (from before the ChatGPT poisoning of the web became significant)
https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md
“Data Freshness The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.”
“Model Dates Llama 2 was trained between January 2023 and July 2023.”
StableLM 3B was trained on more recent data (cutoff of March 2023), yet it doesn’t show this amount of ChatGPT poisoning
https://huggingface.co/stabilityai/stablelm-base-alpha-3b-v2
“llama2 7b > llama2 13b”
lol
please make a 13b model…
If the US keeps going full woke and is too afraid to work as hard as possible on the LLM ecosystem, China won’t think twice before winning this battle (which is basically the defining technology battle of the 21st century)
Feels sad to see the US decline like that…