• buzzyness@alien.top · 1 year ago

    Very cool, there could be lots of applications of this approach (from an archival standpoint), maybe museums? What are your thoughts on finetuning versus asking llama to chat in the style of a 17th-century astronomy book?

    • Dorialexandre@alien.top (OP) · 1 year ago

      Well, that was actually my original motivation for finetuning. Even GPT-4 with a proper prompt is not so good: the text feels fake and/or struggles to maintain cultural consistency. I think finetuning works better for this task, as there are too many directives to give, and it helps relieve the model of anachronistic RLHF behavior.

      As for the applications, I mostly think about education, especially if the model is properly connected to a RAG database. It can be a very interesting way to get immersed in a time period on all kinds of topics.
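      To make the RAG idea concrete, here is a minimal sketch (not the OP's actual pipeline; the corpus snippets, function names, and prompt wording are all hypothetical): retrieve the period passage that best overlaps the question, then splice it into the prompt handed to the finetuned model.

      ```python
      # Hypothetical sketch of RAG over a period-text corpus: pick the passage
      # with the most word overlap with the question, then build a prompt
      # around it. A real system would use embeddings, not word overlap.

      def retrieve(query: str, corpus: list[str]) -> str:
          """Return the corpus passage sharing the most words with the query."""
          q_words = set(query.lower().split())
          return max(corpus, key=lambda p: len(q_words & set(p.lower().split())))

      def build_prompt(query: str, corpus: list[str]) -> str:
          """Splice the retrieved passage into a prompt for the finetuned model."""
          context = retrieve(query, corpus)
          return (
              "Answer in the manner of a 17th-century astronomy treatise.\n"
              f"Source passage: {context}\n"
              f"Question: {query}\n"
          )

      # Toy corpus standing in for a database of archival excerpts.
      corpus = [
          "Of the motions of the planets and their epicycles.",
          "On the grinding of lenses for the perspective glass.",
      ]
      prompt = build_prompt("How are telescope lenses made?", corpus)
      ```

      The retrieved passage grounds the model in genuine period material, which should help with the cultural-consistency problem mentioned above.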

    • unamednational@alien.top · 1 year ago

      Would be awesome in the classroom. If kids could ask George Washington exactly what happened, I think they'd care more. Plus they could tell him to go f himself for infinite amusement.