It’s no secret that many language models and fine-tunes are trained on datasets generated with GPT models. The problem is that many “GPT-isms” end up in those datasets. I’m not just referring to the typical expressions like “however, it’s important to…” or “I understand your desire to…”, but also to the structure of the model’s responses. ChatGPT (GPT models in general) tends to follow a very predictable structure when in its “soulless assistant” mode, which makes it very easy to say “this is very GPT-like”.

What do you think about this? Oh, and by the way, forgive my English.

  • stereoplegic@alien.topB · 1 year ago

    I’m more concerned with the community’s outsized reliance on/promotion of OAI-generated datasets and models trained on them. But then, commercial viability isn’t generally a concern when you want a spicy waifu.