I tried to apply a lot of prompting techniques in 7b and 13b models. And no matter how hard I tried, there was barely any improvement.
Models from Mistral, and small models fine-tuned on QA or instructions, need specific instructions in question format. For example: Prompt 1: "Extract the name of the actor mentioned in the article below." This prompt may not give the expected results. Now if you change it to: Prompt 2: "What's the name of the actor mentioned in the article below?" you'll get better results. So yes, prompt engineering is important in small models.
I wouldn't really consider rephrasing a question prompt engineering, but yes, the way the model was trained dictates how you should ask it questions, and if you don't follow the proper format, you're less likely to get the response you want.
Well, I believe that at its core, prompt engineering is the process of guiding generative AI solutions toward a desired output. So iterating over rephrasings, prompt versioning, and of course using the proper format are all essential. I'm testing some new software architectures using three instances of Mistral with different tasks, using the output of one as the input to the next, and boy, Mistral is amazing.
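The chaining idea above can be sketched roughly like this. This is a minimal illustration, not the poster's actual setup: `generate` is a hypothetical stand-in for whatever client you use to call each Mistral instance, and the task prompts are made up for demonstration.

```python
def run_pipeline(generate, tasks, text):
    """Chain model calls: each task's output becomes the next task's input.

    `generate` is a placeholder for a real model call (e.g. to a local
    Mistral instance); swap in your own client function.
    """
    result = text
    for task in tasks:
        # Each stage gets its own instruction plus the previous stage's output.
        result = generate(f"{task}\n\n{result}")
    return result

# Usage with a stub in place of a real model call:
def fake_generate(prompt):
    # Pretend "model" that just echoes the last line in upper case.
    return prompt.splitlines()[-1].upper()

tasks = [
    "Summarize the article below in one sentence:",
    "Translate the text below to French:",
    "What's the name of the actor mentioned in the text below?",
]
print(run_pipeline(fake_generate, tasks, "some article text"))
```

The point is just the data flow: three differently-tasked instances wired in sequence, with the output of one feeding the input of the next.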