The problem:

The web has obviously reached a high level of #enshitification. Paywalls, exclusive walled gardens, #Cloudflare, popups, CAPTCHAs, Tor blockades, dark patterns (esp. w/ cookies), JavaScript that turns the website into an app (not a doc), etc.

Status quo solution (failure):

#Lemmy & the #threadiverse were designed to inherently trust humans to only post links to non-shit websites, and to only upvote content that either has no links or links only to non-shit venues.

It’s not working. The social approach is a systemic failure.

The fix:

  • stage 1 (metrics collection): There need to be shitification metrics for every link. Readers should be able to click a “this link is shit” button on a per-link basis & there should be tick boxes to indicate the particular variety of shit that it is (a rough data-model sketch follows this list).

  • stage 2 (metrics usage): If many links with the same hostname show a pattern of matching enshitification factors, the Lemmy server should automatically tag all those links with a warning of some kind (e.g. ⚠, 💩, 🌩).

  • stage 3 (inclusive alternative): A replacement link to a mirror is offered. E.g. youtube → (non-CF’d invidious instance), cloudflare → archive.org, medium.com → (random scribe.rip instance), etc.

  • stage 4 (onsite archive): Good Samaritans and over-achievers should have the option to provide the full text of a given link so others can read the article without even fighting the site.

  • stage 5 (search reranking): whenever a human posts a link and talks about it, search crawlers notice and give that site a higher ranking. This is why search results have gotten lousy: the social approach has failed, and humans will keep posting bad links. So links with a high enshitification score need to be obfuscated in some way (e.g. dots become asterisks) so search crawlers don’t overrate them going forward (stages 3 & 5 are sketched in code after this list).
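
A minimal sketch of what stages 1 & 2 could look like on the server side, assuming a hypothetical report-and-tally model (the category names, the threshold of 10 reports, and the emoji mapping are all illustrative, not an actual Lemmy schema):

```rust
use std::collections::HashMap;

/// Varieties of shit a reader can tick when flagging a link (stage 1).
/// These category names are illustrative, not an actual Lemmy schema.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum ShitKind {
    Paywall,
    Cloudflare,
    Captcha,
    TorBlockade,
    Popups,
    CookieDarkPattern,
    JsApp,
}

/// One reader report against one link.
struct Report {
    host: String,
    kind: ShitKind,
}

/// Per-host tallies of reports (stage 2).
#[derive(Default)]
struct HostStats {
    counts: HashMap<ShitKind, u32>,
}

impl HostStats {
    fn add(&mut self, kind: ShitKind) {
        *self.counts.entry(kind).or_insert(0) += 1;
    }

    /// Once enough reports of one kind pile up, pick a warning tag for every
    /// link on this host. The threshold of 10 is made up.
    fn warning(&self) -> Option<char> {
        let (kind, n) = self.counts.iter().max_by_key(|(_, n)| **n)?;
        if *n < 10 {
            return None;
        }
        Some(match kind {
            ShitKind::Cloudflare | ShitKind::TorBlockade => '🌩',
            ShitKind::Paywall => '💩',
            _ => '⚠',
        })
    }
}

/// Roll individual reports up into per-host statistics.
fn aggregate(reports: &[Report]) -> HashMap<String, HostStats> {
    let mut by_host: HashMap<String, HostStats> = HashMap::new();
    for r in reports {
        by_host.entry(r.host.clone()).or_default().add(r.kind);
    }
    by_host
}

fn main() {
    // Simulate a dozen readers flagging the same host as paywalled.
    let reports: Vec<Report> = (0..12)
        .map(|_| Report { host: "medium.com".into(), kind: ShitKind::Paywall })
        .collect();
    for (host, stats) in aggregate(&reports) {
        if let Some(tag) = stats.warning() {
            println!("{tag} {host}");
        }
    }
}
```

The point is only that per-link reports roll up into per-host statistics, so the server can decorate every link on a flagged host without each client keeping its own copy of the data.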
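
In the same spirit, a hedged sketch of stages 3 & 5: offering a mirror for a flagged link and obfuscating links with a high score so crawlers stop treating them as endorsements. The mirror hostnames and the archive.org fallback are placeholders that a real deployment would make configurable:

```rust
/// Stage 3: offer a mirror link for hosts that have a known alternative
/// front-end. The replacement hostnames here are placeholders; a real
/// deployment would read them from an admin-maintained, per-instance list.
fn mirror_for(url: &str, host: &str) -> Option<String> {
    match host {
        // e.g. a non-Cloudflared Invidious instance for YouTube links
        "youtube.com" | "www.youtube.com" => Some(url.replacen(host, "invidious.example", 1)),
        // e.g. a scribe.rip front-end for Medium links
        "medium.com" => Some(url.replacen(host, "scribe.rip", 1)),
        _ => None,
    }
}

/// Stage 3 fallback for Cloudflared or paywalled hosts: point at an archived
/// copy instead (the exact Wayback URL form may need adjusting).
fn archived(url: &str) -> String {
    format!("https://web.archive.org/web/{url}")
}

/// Stage 5: obfuscate a link with a high enshitification score (dots become
/// asterisks) so search crawlers stop counting the post as an endorsement.
fn obfuscate(url: &str) -> String {
    url.replace('.', "*")
}

fn main() {
    let url = "https://medium.com/@someone/some-article";
    let suggestion = mirror_for(url, "medium.com").unwrap_or_else(|| archived(url));
    println!("suggested mirror: {suggestion}");
    println!("obfuscated link:  {}", obfuscate(url));
}
```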

This needs to be recognized as a #LemmyBug.

  • rglullis · 1 year ago
    > The metrics need to be centralized.

    Why? And how would you guarantee the integrity of the ones holding the metrics?

    > this imposes an inefficient amount of effort & expertise on the end-user.

    A lot less effort than having to deal with the different “features” that each website admin decides to run on their own.

    • activistPnk@slrpnk.net (OP) · edited · 1 year ago

      > Why?

      1. It’s a big database. It would be a poor design to replicate a db of all links in every single client.
      2. Synchronization of the db would not be cheap. When Bob says link X has anti-feature Y, that information must then be shared with tens of thousands of other users (rough numbers sketched below).
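
      To make that sync cost concrete, here is a back-of-envelope comparison; the client count, report size, and daily rates are pure assumptions for illustration:

      ```rust
      fn main() {
          // All figures below are assumptions for illustration only.
          let clients: u64 = 50_000;           // clients across the threadiverse
          let report_bytes: u64 = 200;         // one "link X has anti-feature Y" record
          let reports_per_day: u64 = 10_000;
          let links_viewed_per_day: u64 = 200; // per client

          // Replicating the whole DB: every report is pushed to every client.
          let push = clients * report_bytes * reports_per_day;
          // Server-side tallies: a client only fetches tags for links it renders.
          let pull = clients * report_bytes * links_viewed_per_day;

          println!("push every report to every client: ~{} GB/day", push / 1_000_000_000);
          println!("fetch tags only for rendered links: ~{} GB/day", pull / 1_000_000_000);
      }
      ```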

      Perhaps you have a more absolute idea of centralized. Mastodon votes are centralized on each node, but overall that’s actually decentralized. My bad; I probably shouldn’t have said “centralized”. I meant more centralized than a client-by-client basis. It’d be premature to pin those details down at this point, other than to say it’s crazy for each client to maintain a separate copy of that DB.

      > And how would you guarantee the integrity of the ones holding the metrics?

      The server is much better equipped than the user for that. The guarantee would be the same guarantee that you have with Mastodon votes. Good enough to be fit for purpose. For any given Mastodon poll everyone sees a subset of votes. But that’s fine. Perfection is not critical here. You wouldn’t want it to decide a general election, but you don’t need that level of integrity.

      > A lot less effort than having to deal with the different “features” that each website admin decides to run on their own.

      That doesn’t make sense. Either one person upgrades their Lemmy server, or thousands of people have to install, configure, and maintain a dozen different browser plugins ported to a variety of different browsers (so close to impossible it might as well be called impossible). Then every Lemmy client would also have to replicate that complexity.