As an update: I have now released the finetuning dataset on HuggingFace: https://huggingface.co/datasets/Pclanglais/MonadGPT
Overall 10,797 excerpts in early modern English, French, and Latin, with synthetic questions generated by Mistral-Hermes.
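For anyone who wants to poke at the data, something along these lines should work with the datasets library (the split name and column layout are assumptions on my part, check the dataset card for the actual schema):

    from datasets import load_dataset

    # Pull the MonadGPT finetuning dataset from the Hub.
    # "train" as the split name is an assumption; see the dataset card.
    ds = load_dataset("Pclanglais/MonadGPT", split="train")

    print(len(ds))   # should be on the order of 10,797 excerpts
    print(ds[0])     # one excerpt together with its synthetic question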
Which frontend is that?
Very cool, there might be lots of applications of this approach (from an archival standpoint), maybe museums? What are your thoughts on finetuning vs. asking llama to chat in the form of a 17th century astronomy book?
Well, that was actually my original motivation for finetuning. Even GPT-4 is not so good with a proper prompt: the text feels fake and/or struggles to maintain cultural consistency. I think finetuning works better for this task, as there are too many directives to give, and it helps relieve the model of anachronistic RLHF.
As for applications, I mostly think about education, especially if the model is properly connected to a RAG database (a rough sketch of the retrieval step below). It can be a very interesting way to get immersed in a time period on any kind of topic.
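To make that concrete, here is a minimal sketch of what the retrieval step could look like, using sentence-transformers; the embedding model and the toy corpus are placeholders, not anything MonadGPT actually ships with:

    from sentence_transformers import SentenceTransformer, util

    # Toy stand-in for a database of period sources; in practice this would be
    # the 17th-century excerpts themselves or related documents.
    corpus = [
        "Of the seven planets and their motions about the Earth.",
        "A treatise on the circulation of humours in the body.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    corpus_emb = embedder.encode(corpus, convert_to_tensor=True)

    query = "How many planets are there?"
    query_emb = embedder.encode(query, convert_to_tensor=True)

    # Retrieve the closest passage and prepend it to the chat prompt as context.
    hit = util.semantic_search(query_emb, corpus_emb, top_k=1)[0][0]
    context = corpus[hit["corpus_id"]]
    print(context)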
Would be awesome in a classroom. If kids can ask George Washington what exactly happened, I think they'd care more. Plus they could tell him to go f himself for infinite amusement.
Interestingly, if you tell OpenHermes-Mistral 2.5 in the system prompt that it is from the 17th century and uses archaic language, it will also say there are 7 planets.
You are MonadGPT, a very old chatbot from the 17th century. Please answer the questions using an archaic language
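If you want to try reproducing that with plain OpenHermes, a sketch along these lines should work with transformers (the exact checkpoint id, teknium/OpenHermes-2.5-Mistral-7B, is my assumption; swap in whichever one you use):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "teknium/OpenHermes-2.5-Mistral-7B"  # assumed checkpoint id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [
        {"role": "system", "content": "You are MonadGPT, a very old chatbot from the 17th century. "
                                      "Please answer the questions using an archaic language"},
        {"role": "user", "content": "How many planets are there?"},
    ]

    # OpenHermes uses a ChatML-style template, so apply_chat_template builds the prompt.
    inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                           return_tensors="pt").to(model.device)
    output = model.generate(inputs, max_new_tokens=200)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))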
Link to the ongoing demo for MonadGPT, with generous GPU support from HuggingFace: https://huggingface.co/spaces/Pclanglais/MonadGPT
The model has been published as well (and soon the dataset): https://huggingface.co/Pclanglais/MonadGPT?text=Hi.
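If you would rather run the published checkpoint locally, something like this should work with a recent transformers release (assuming the model ships a chat template; otherwise build the prompt by hand as above):

    from transformers import pipeline

    pipe = pipeline("text-generation", model="Pclanglais/MonadGPT", device_map="auto")

    messages = [
        {"role": "system", "content": "You are MonadGPT, a very old chatbot from the 17th century. "
                                      "Please answer the questions using an archaic language"},
        {"role": "user", "content": "What is the fairest planet in the heavens?"},
    ]

    # Recent pipelines accept chat-format input and return the full conversation,
    # with the model's reply as the last message.
    out = pipe(messages, max_new_tokens=200)
    print(out[0]["generated_text"][-1]["content"])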
Did we use to spell "we" as "wee"?
Absolutely brutal bot and very opinionated. Cool idea.
How was it trained? Did you just train it on the passages from those books? If so, I am very surprised it retained its conversational capabilities. I would expect it to just go off the rails and generate random 17th century stuff.