As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

  • poVoq@slrpnk.net · 2 months ago

    Even more problematic are entire communities made up of astroturfing bots. This kind of thing is increasingly easy and cheap to set up and will fool most people looking for advice online.

    • drkt@lemmy.dbzer0.com · 2 months ago

      I am convinced that the bidet shills on Reddit are bots. There’s just no way that hundreds of thousands of people are suddenly interested in shitting appliances.

    • Danterious@lemmy.dbzer0.com · 2 months ago · edited

      Maybe we should look for ways of tracking coordinated behaviour. A definition I’ve heard for social media propaganda is “coordinated inauthentic behaviour”, and while I don’t think it’s possible to determine whether any single user is authentic, it should be possible to see whether different users behave consistently with one another and what they are coordinating on.

      Edit: All bots have a purpose eventually, and that purpose should be visible.

      Edit2: Eww, realized the term came from Meta. If someone has a better term I will use that instead.

      Anti Commercial-AI license (CC BY-NC-SA 4.0)
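      The coordination idea above can be sketched in code. A very naive approach (purely illustrative; the data shapes, names, and threshold here are assumptions, not any real Lemmy or Fediverse API) is to compare how much different accounts overlap in where they post: accounts whose activity targets are nearly identical are candidates for a closer look.

      ```python
      # Toy sketch of flagging possibly coordinated accounts by the overlap
      # in which threads they post to. All names and the threshold are
      # illustrative assumptions, not a real platform API or a robust detector.
      from itertools import combinations

      def jaccard(a: set, b: set) -> float:
          """Overlap between two sets of thread IDs (0 = disjoint, 1 = identical)."""
          return len(a & b) / len(a | b) if (a or b) else 0.0

      def flag_coordinated(activity: dict, threshold: float = 0.5) -> list:
          """activity maps user -> set of thread IDs the user posted in.

          Returns user pairs whose posting targets overlap at or above
          the (arbitrary) threshold.
          """
          return [
              (u, v)
              for u, v in combinations(sorted(activity), 2)
              if jaccard(activity[u], activity[v]) >= threshold
          ]

      # Hypothetical example data: two accounts posting to almost the same threads.
      activity = {
          "bot_a": {"t1", "t2", "t3", "t4"},
          "bot_b": {"t1", "t2", "t3", "t5"},
          "human": {"t2", "t9"},
      }
      print(flag_coordinated(activity))  # → [('bot_a', 'bot_b')]
      ```

      A real detector would need much more signal (timing correlation, text similarity, account age) and would still produce false positives, since genuinely popular topics also draw many accounts to the same threads.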