I've tried applying a lot of prompting techniques to 7B and 13B models, and no matter how hard I tried, there was barely any improvement.
For now, the simpler and narrower the scope of the instruction, the better; these models just can't follow complex, multi-part tasks. Get a feel for how the model tends to reason in general, then pick one thing for it to do and arrange your prompt around that.
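To make "narrow the scope" concrete, here's a rough sketch; the generate() function is just a placeholder for whatever local inference wrapper you're running your 7B/13B model with (llama.cpp bindings or similar), not a real API:

```python
# Placeholder: stands in for whatever actually runs your local model.
def generate(prompt: str) -> str:
    ...

article = "..."  # your input text

# A small model tends to choke on a single prompt like:
# "Read the article, summarize it, extract the key entities, and rate the sentiment."

# So split it into one narrow instruction per call instead:
summary = generate(f"Summarize the following article in three sentences:\n{article}")
entities = generate(f"List the people and organizations mentioned in this article:\n{article}")
sentiment = generate(f"Classify the sentiment of this article as positive, negative, or neutral:\n{article}")
```

In my experience each of these narrow calls has a much better chance of coming back usable than one big combined prompt does.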