• 1.8K Posts
  • 2.73K Comments
Joined 3 years ago
Cake day: June 9th, 2023


  • rglullis to Fediverse@lemmy.world · Converser.eu is being flooded · 1 point · 13 hours ago

    they seem to want this service to stay free and open

    TANSTAAFL.

    The admin might think this generosity is good for the users, but at the end of the day it just gets them burned out and gives people who signed up the impression that all Matrix servers are slow. Meanwhile, access to my Matrix server is not free ($29/year, less than $2.50/month), but by charging just a little bit I can make sure that it grows at a rate I can manage and doesn’t make my infrastructure implode.


  • rglullis to Fediverse@lemmy.world · *Permanently Deleted* · 18 points · 23 days ago

    He likes the process of working on greenfield ideas and gets bored once he ships the MVP, which is fine. What is not fine is that he makes a ton of hype around new projects, but after he gets tired of playing with it, he refuses to let go. It sucks the air out of the community for very little benefit.

    I was hoping that the PixelFed Kickstarter would force him to finally focus on the damn thing, but it seems he simply does not have the drive or interest to work at a steady pace on a single product.


  • Once you achieve any kind of scale, whoever your client is querying to get the book data for those kinds of queries is going to block you

    You know that the whole of Wikidata can be copied in just a few hundred GBs, right? There are plenty of examples of community-driven data providers (especially in the *arr space), so I can bet that there would be more people setting up RDF data servers (which is mostly read-heavy, public data sharing) than people willing to set up their own Mastodon/Lemmy/GoToSocial server - because that involves replicating data from everyone else, dealing with network partitions, etc…

    Also, there are countless ways to make this less dependent on any big server: the client could pull specific subsets of the data and cache it locally, so the more it is used the less it would need to fetch remote resources.
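    A minimal sketch of that caching idea, assuming a hypothetical `LinkedDataCache` class where the fetch function is injected (a real client would pass in an HTTP GET that returns a parsed JSON-LD document):

```python
class LinkedDataCache:
    """Client-side cache for linked-data lookups (hypothetical sketch).

    The fetch callable is injected, so a real client can pass an HTTP GET
    returning parsed JSON-LD while tests can pass a stub. Repeated lookups
    of the same IRI never hit the remote server again.
    """

    def __init__(self, fetch):
        self._fetch = fetch   # callable: IRI -> parsed JSON-LD document
        self._store = {}      # IRI -> cached document

    def resolve(self, iri):
        """Return the document for an IRI, fetching it at most once."""
        if iri not in self._store:
            self._store[iri] = self._fetch(iri)
        return self._store[iri]
```

    The more the client is used, the more lookups are served from the local store instead of the network - which is exactly the "pull subsets and cache locally" behavior described above.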

    Think of it like this: a client-first application that understands linked data would be no different from a traditional web browser; the main difference is that the client would consume JSON-LD instead of HTML.


  • Or are all of the books objects stored on activitypub and I get the data from the social graph itself?

    Not “stored on ActivityPub”, but each book could be represented with RDF (it could be something as sophisticated as using Dublin Core or as simple as using ISBNs to uniquely identify the books: urn:isbn:1234556789), and then “CombatWombatEsq read a book” would be an activity where you are the actor and the book is the object. It would then be up to the client to expand that information: your client app could take the ISBN and query Wikidata, or Amazon, or nothing at all…
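    As an illustration of that shape, here is a hedged sketch of such an activity as a JSON-LD document built in Python. The "Read" type is not a standard ActivityStreams verb and the actor IRI is hypothetical; the ISBN URN is the placeholder from above:

```python
import json

# Hypothetical "read a book" activity: the actor is a fediverse user,
# the object is the book, identified only by its ISBN URN. A client
# could expand the object by querying Wikidata, Amazon, or nothing at all.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Read",  # illustrative only; not a standard ActivityStreams type
    "actor": "https://example.social/users/CombatWombatEsq",  # hypothetical IRI
    "object": {
        "id": "urn:isbn:1234556789",  # the book, identified by ISBN alone
        "type": "Document",
    },
}

print(json.dumps(activity, indent=2))
```

    Since the object carries nothing but an identifier, how much to "expand" it (cover art, author, page count) is entirely the client's choice.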


  • It stores the complete data for any given user post in its databases

    That is not fully correct. They index the data from the different personal data servers, and they do host the largest personal data server out there, but you can have your own PDS and interact with other Bluesky users without having to rely on their data.

    This means each one has its own data model, internal storage architecture, and streams/APIs.

    Yeah, but why? ActivityPub already provides the “data model” and the API. Internal storage is an implementation detail. Why do we continue to accept this idea that each different mode of interaction with the social graph requires an entirely separate server?

    Because they were built for different purposes, they support different features

    Like OP said, on Bluesky it is possible to have different “shells” that interact with the network. Why wouldn’t that be possible on ActivityPub?