PS: This is text from Bing AI.

  • stddealer@alien.top · 1 year ago
    Putting “safety” mechanisms in foundational models is dumb, imo. They aren’t just text generators; they’re statistical models of human language, and they shouldn’t have arbitrary, made-up biases about what language should look like.

    • api@alien.top · 1 year ago

      It’s not hard to fine-tune base models toward any bias you want. “Zero bias” isn’t possible; there’s always some bias in the training data.
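
      The “bias comes from the data” point can be shown with a toy sketch (not from this thread; the corpus and names here are made up for illustration): even the simplest statistical language model, a bigram counter, reproduces whatever skew its training text happens to contain.

      ```python
      from collections import Counter, defaultdict

      def train_bigram(corpus):
          """Count next-word frequencies for each word in the corpus."""
          model = defaultdict(Counter)
          for sentence in corpus:
              words = sentence.split()
              for a, b in zip(words, words[1:]):
                  model[a][b] += 1
          return model

      # Hypothetical toy corpus: "good" simply appears more often after "is".
      corpus = [
          "coffee is good",
          "coffee is good",
          "coffee is great",
          "tea is fine",
      ]

      model = train_bigram(corpus)

      # The model's "preference" is just a frequency artifact of its data:
      print(model["is"].most_common(1))  # -> [('good', 2)]
      ```

      Any model trained this way, scaled up or not, inherits the statistics of its corpus; removing one skew just trades it for another.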