Hi

I am a computer science student just starting my master's thesis. My focus will be on content moderation (algorithms), so I am currently exploring how various social media applications moderate content.

If I understand the docs correctly, content moderation on Mastodon is all manual labor? I haven't read anything about automatic detection of Child Sexual Abuse Material (CSAM), for example, which is something most centralised platforms seem to do.

Another question, going in a similar direction, is about the reposting of already-moderated content, for example a racist meme that was posted and removed before. Are there any measures in place to detect this?
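
To make concrete the kind of measure I have in mind: centralised platforms typically catch reposts of removed images by comparing perceptual hashes rather than exact file hashes, so re-encoded or lightly edited copies still match. A minimal sketch of the idea in Python, assuming the third-party Pillow and imagehash packages; the stored hash of a "previously removed" image is a made-up placeholder:

```python
import imagehash
from PIL import Image

# Perceptual hashes of images that moderators have already removed
# (the hex value here is a made-up placeholder).
removed_hashes = [imagehash.hex_to_hash("ffd7918181c9ffff")]

def is_likely_repost(path, max_distance=5):
    """Return True if the image is perceptually close to a removed one."""
    candidate = imagehash.phash(Image.open(path))
    # Hamming distance between the 64-bit hashes; a small distance means a
    # near-duplicate, so crops and re-encodes of the same meme still match.
    return any(candidate - known <= max_distance for known in removed_hashes)

print(is_likely_repost("upload.jpg"))
```

Exact hashes (e.g. SHA-256) would only catch byte-identical reposts, which is why near-duplicate detection usually relies on perceptual hashing.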

Thank you for your help!

  • indiealexh@alien.topB · 1 year ago

    Maybe, but who's paying? And why would they release it to the public for free?

    Companies want to make money. Individuals want to spend as little as they can.

    Many companies have also recently been pushing back against open source because they can't license it the way they want to maximize profit. So there are a few companies that would pay for such a thing and open-source it all, but not many, and they have other priorities.

    Maybe Bluesky or whatever will care enough…

    • WinteriscomingXii@alien.topB · 1 year ago

      What do you mean? There are tools out there already; Lemmy had one made recently that helps with CSAM. Plus, the Fedi is always touting donations as the way, so why wouldn't people donate if funds are needed?
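
      For the OP: one common shape for that kind of tool is scanning new uploads against a hash list of known material and flagging matches for a human moderator to review. A rough sketch only, not how any specific fediverse tool is implemented; the hash list below is a dummy placeholder, and real deployments use robust perceptual hashes (e.g. PhotoDNA) rather than plain SHA-256:

      ```python
      import hashlib
      from pathlib import Path

      # Dummy stand-in for a hash list supplied by a clearinghouse.
      known_bad_hashes = {"0" * 64}

      def flag_for_review(upload_dir):
          """Yield uploaded files whose hash matches the known-bad list."""
          for path in Path(upload_dir).iterdir():
              if not path.is_file():
                  continue
              digest = hashlib.sha256(path.read_bytes()).hexdigest()
              if digest in known_bad_hashes:
                  # Flag for a human moderator instead of deleting outright.
                  yield path

      for hit in flag_for_review("media/uploads"):
          print("needs review:", hit)
      ```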