Very cool! There might be lots of applications of this approach (from an archival standpoint), maybe museums? What are your thoughts on finetuning vs. asking llama to chat in the form of a 17th-century astronomy book?
Well, that was actually my original motivation for finetuning. Even GPT-4 is not so good at this, even with a proper prompt: the text feels fake and/or struggles to maintain cultural consistency. I think finetuning works better for this task, as there are too many directives to fit into a prompt, and it helps to relieve the model of anachronistic RLHF.
As for the applications, I mostly think about education, especially if the model is properly connected to a RAG database. It can be a very interesting way to get immersed in a time period on all kinds of topics.
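To make the RAG idea concrete, here is a minimal sketch of what "connected to a RAG database" could look like: retrieve period passages relevant to a question, then feed them to the finetuned model as grounding context. Everything here is hypothetical, not the author's actual pipeline: the corpus snippets, the question, and the prompt format are stand-ins, and retrieval is plain TF-IDF via scikit-learn purely for illustration.

```python
# Hypothetical sketch: ground a period-voiced finetuned model in retrieved
# passages from a (stand-in) archive of 17th-century sources.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder snippets standing in for a real document store.
corpus = [
    "Of Comets, and the signes they portend unto Princes and Nations...",
    "The Moone, as Galilaeus hath shewn by his perspective glasse...",
    "Concerning the motions of the Planets about the Sunne...",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(corpus + [query])
    scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

question = "What do comets foretell?"
context = "\n".join(retrieve(question))
# The finetuned model would answer in period voice, grounded in the context;
# the prompt template below is an assumption, not the model's actual format.
prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
print(prompt)  # pass this to whatever inference stack serves the model
```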
Would be awesome in a classroom. If kids can ask George Washington what exactly happened, I think they'd care more. Plus they could tell him to go f himself for infinite amusement.