• Dorialexandre@alien.top OP · 1 year ago

Well, that was actually my original motivation for finetuning. Even GPT-4 is not so good at this with a proper prompt: the text feels fake and/or struggles to maintain cultural consistency. I think finetuning works better for this task, since there are too many directives to fit into a prompt, and it helps relieve the model of anachronistic RLHF behavior.

As for applications, I mostly think about education, especially if the model is properly connected to a RAG database. It can be a very interesting way to get immersed in a time period across all kinds of topics. A minimal sketch of what that could look like is below.
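
To make the RAG idea concrete, here is a minimal sketch, not the actual setup. Everything specific in it is an assumption for illustration: `my-org/historical-llm` is a hypothetical placeholder for a finetuned historical model, the two-document corpus is invented, and a real deployment would use a proper vector database instead of an in-memory array.

```python
# Minimal RAG sketch: retrieve period documents, then let the
# (hypothetical) finetuned historical model answer grounded in them.
import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import pipeline

# Tiny invented corpus of period documents; a real setup would index
# thousands of sources in a vector database.
corpus = [
    "Gazette entry, 1788: the price of bread in Paris has risen again...",
    "Pamphlet, 1789: on the convocation of the Estates-General...",
]

# Embed the corpus once; retrieve by cosine similarity at query time
# (embeddings are normalized, so a dot product is cosine similarity).
embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = embedder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)
    scores = corpus_emb @ q[0]
    return [corpus[i] for i in np.argsort(-scores)[:k]]

# Prepend the retrieved passages so the finetuned model stays anchored
# to period sources instead of drifting into anachronism.
# "my-org/historical-llm" is a placeholder model name.
generator = pipeline("text-generation", model="my-org/historical-llm")

query = "What do Parisians say about bread prices?"
context = "\n".join(retrieve(query))
prompt = f"{context}\n\nQuestion: {query}\nAnswer:"
print(generator(prompt, max_new_tokens=100)[0]["generated_text"])
```

The point of the split is that the retriever handles factual grounding in period sources while the finetuned model handles period voice and cultural framing, so neither has to do the other's job.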