I write ̶b̶u̶g̶s̶ features, show off my adorable standard issue cat, and give a shit about people and stuff. I’m also @CoderKat.

  • 1 Post
  • 15 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • Naw, I still use Google. With an ad blocker, I find it provides the best results by far (though the ad blocker is important, because misleading ads sometimes get through). It’s superior when searching by description (e.g., you can’t remember a movie title and have to describe it) and for local results. Plus I use Maps heavily (it’s superior to its competitors), and it integrates with Google Search.

    I just frankly don’t care that much about my searches being tracked or the like. I see it as the cost of getting a quality product for free. The only reason I even have the ad blocker is frankly because their ads are terrible. They don’t do enough to curate their ads, so scams sometimes slip in. I also think it’s very scummy that you can search, e.g., “pizza hut” and get an ad for Domino’s above the Pizza Hut result.


  • The whole CSAM issue is why I’d never personally run an instance, nor any other kind of server that lets users upload content. It’s an issue I have no desire to moderate, and I don’t want the legal risk of that content merely existing on a server I control.

    While I’d like to hope that law enforcement would be reasonable and understand “oh, you’re just some small time host, just delete that stuff and you’re good”, my opinion on law enforcement is in the gutter. I wouldn’t trust law enforcement not to throw the book at me if someone did upload illegal content (or if I didn’t handle it correctly). Safest to let someone else deal with that risk.

    And even if you can win some case in court, just having to go to court can be ludicrously expensive and risk high impact negative press.






  • Strongly agreed. I think a lot of commenters in this thread are getting derailed by their feelings towards Meta. This is truly a dumb, dumb law and it’s extremely embarrassing that it even passed.

    It’s not just Meta. No company wants to comply with this poorly thought out law, written by people who apparently have no idea how the internet works.

    I think most of the people in the comments cheering this on haven’t read the bill. It requires platforms to pay news sites simply for linking to them. Which is utterly insane. Linking to news sites is a win-win: Facebook or Google gets to show relevant content, and the news site gets readers. This bill is going to hurt Canadian news sites because sites like Google and Facebook will simply avoid linking to them.


  • Barriers are relative. Everything that makes signup slightly harder will stop a large chunk of bots, since bots can’t adapt as easily as humans can. Plenty of very basic bots are in fact stopped by an email requirement alone.

    But yeah, email verification is more about confirming that the user actually has access to the address, and thus that the email is safe to use for things like password resets. Without it, webmasters can get swamped with complaints from people locked out of their accounts because they signed up with the wrong email.

    In theory, you can also go further by only allowing email providers that have anti-bot mechanisms of their own, but such an allowlist is difficult to maintain and will always exclude some legitimate users.
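
    For what it’s worth, here’s a minimal sketch of what that verification handshake usually looks like (the in-memory store and helper names are invented for illustration; a real site would persist tokens with an expiry and actually send the email):

      // Minimal email-verification sketch (TypeScript / Node).
      import { randomBytes } from "crypto";

      // token -> email; a real implementation would use a database with expiry.
      const pending = new Map<string, string>();

      function startVerification(email: string): string {
        const token = randomBytes(32).toString("hex"); // unguessable, one-time token
        pending.set(token, email);
        // A real site would now email a link like https://site/verify?token=<token>
        return token;
      }

      function completeVerification(token: string): string | null {
        const email = pending.get(token) ?? null;
        if (email !== null) pending.delete(token); // single use
        return email; // non-null means the user demonstrably controls the address
      }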


  • You can’t just store aggregates internally, anyway: the server needs to know whether a given user has already voted on something.

    I think ActivityPub needs to be extended so that votes (likes and reduces) only need to be sent to the host of the content, with federation then carrying just the aggregate counts. Then the only servers that need to know the identity of voters are the host server (necessary to ensure nobody can vote more than once) and optionally the server the user voted from (it could just relay the vote to the host server and not store it locally, but then it’d be harder to show what you’ve already upvoted; browser local storage could work, but lots of people use social media on multiple devices).
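
    Roughly, the difference looks like this (a hypothetical sketch; the aggregate fields are invented here, not part of any current spec):

      // Today: each vote federates as an individual ActivityPub Like/Dislike,
      // so every federated server learns who voted.
      const perUserVote = {
        "@context": "https://www.w3.org/ns/activitystreams",
        type: "Like",
        actor: "https://example.instance/u/alice", // voter identity leaks everywhere
        object: "https://host.instance/post/123",
      };

      // Proposed: only the host sees individual votes; federation carries totals.
      // The upvotes/downvotes fields are invented for illustration.
      const aggregateUpdate = {
        "@context": "https://www.w3.org/ns/activitystreams",
        type: "Update",
        actor: "https://host.instance/c/some_community",
        object: {
          id: "https://host.instance/post/123",
          upvotes: 41,
          downvotes: 3,
        },
      };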


  • Sometimes reporting technically covers the last one. But usually not. Not all subs have rules against bigotry, trolling, dog whistles, general assholery, etc. I strongly hold it’s important that downvoting is an option to deal with these kinda things. It’s a way to show everyone that the comment isn’t acceptable.

    Plus even when reporting is an option, it may not be fast enough. Can’t really automate removals, either, as people will abuse that.

    Arguably “disagree but acceptable” should just not upvote. In a certain sense, that’s already a middle option.



  • “Has anyone from my server interacted with or searched for the post by its URL” is misleading. I struggled with this yesterday. Turns out you have to search in a very specific way.

    In both kbin and Lemmy, you can’t just go to the community’s URL (which is utterly bizarre). You must search for the full community or magazine name. In Lemmy, you weirdly need the ! in front when searching for it to find it. In kbin, you don’t need that, but you do need to search for the magazine in the “neutral” search mode, not the magazine search mode (lol wut?). Actually, in Lemmy you also have to use the “normal” search field and not the community search field.

    And of course, both have a discovery issue. People want to be able to search a partial string like “hobby” without having to know what instance their community might be on or if the full name might be things like “hobby_discuss”, etc. They should not need a separate tool to do this search. That’s just a barrier to entry.

    Anyway, the whole thing is a usability barrier that needs to change. It also makes smaller instances actively harder to use, which is a bad incentive. We don’t want people to experience small instances as “buggy” (even if it’s working as intended).

    Anyone currently trying to create a sub should have an account on every major instance and subscribe to their new sub to ensure it shows up in the search. And yes, that is just completely silly (and unscalable beyond the biggest instances).
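
    If your instance runs Lemmy, the lookup the search box performs can also be done directly against the API. This is a sketch assuming Lemmy’s /api/v3/resolve_object endpoint (verify against your instance’s API version; the instance and community names are placeholders):

      // Ask your home instance to fetch a remote community by its full name.
      async function findRemoteCommunity(home: string, fullName: string) {
        // fullName must be complete, e.g. "!hobby_discuss@other.instance"
        const res = await fetch(
          `https://${home}/api/v3/resolve_object?q=${encodeURIComponent(fullName)}`
        );
        if (!res.ok) throw new Error(`lookup failed: ${res.status}`);
        return res.json(); // on success, the community is now known to `home`
      }

      // Usage: findRemoteCommunity("my.instance", "!hobby_discuss@other.instance");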


  • I don’t think GDPR necessarily applies here, but I am not a lawyer. Quoting https://gdpr.eu/companies-outside-of-europe/:

    Article 3.1 states that the GDPR applies to organizations that are based in the EU even if the data are being stored or used outside of the EU. Article 3.2 goes even further and applies the law to organizations that are not in the EU if two conditions are met: the organization offers goods or services to people in the EU, or the organization monitors their online behavior. (Article 3.3 refers to more unusual scenarios, such as in EU embassies.)

    I’m not sure exactly what the definition of an organization is, so perhaps any server hosted within the EU is covered by the GDPR, but for servers outside of the EU that don’t have ads (which seems to be all servers currently), I don’t think this would count. The examples on the linked site about “goods and services” include things like ads tailored to European countries, so I suspect that simply serving traffic from Europe isn’t enough.

    The website also mentions the GDPR applies to “professional or commercial activity”. There’s also apparently an exception for organizations with under 250 employees. I don’t even know how that works when something is entirely managed by volunteers, as this currently is.

    At any rate, I suspect we’re a long way off from having to worry about the GDPR.


  • Honestly, I kinda question how good a time investment it is to try to allow deletion from the public-facing parts of the internet, given the numerous places where your content will be cached or otherwise stored.

    There is certainly some value in simply making it as hard as possible to find things you want to delete. Why let perfect be the enemy of good, after all. There’s plenty of types of content we certainly want to do our best at deleting even if we can’t be perfect. E.g., do you wanna be the one to tell a revenge porn victim, “sorry, we can’t make it harder to find the content that harms you because we can’t delete all of it anyway”?

    But at the same time, development time is limited. Everything is a trade off. We do have to decide what is most important, because we can’t do it all immediately. The fact we can’t actually delete everything does have to be a factor in this prioritization, too.

    There is something to be said for ensuring people know and understand that nothing can truly be 100% deleted once it’s posted on the internet. Not that Lemmy is doing a good job of that, either (especially since deleted comments apparently lie about being deleted).

    All this said, I do think federated, reliable deletion is critical for illegal content. Such content needs to be removed quickly and easily from as many places as possible. Without this, instance owners are put at considerable legal risk. This risk poses a threat to the scalability of the Fediverse.
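
    For context, ActivityPub already defines the mechanism: the origin server federates a Delete activity, and receiving servers are expected to replace their copy with a Tombstone. The URLs and timestamp below are placeholders; the catch is that compliance by receivers is voluntary, which is exactly the enforcement gap here:

      // What deletion looks like on the wire in ActivityPub.
      const deleteActivity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        type: "Delete",
        actor: "https://host.instance/u/alice",
        object: "https://host.instance/post/123",
      };

      // What a compliant receiver keeps in place of the deleted object.
      const tombstone = {
        type: "Tombstone",
        id: "https://host.instance/post/123",
        formerType: "Note",                 // the object's original type
        deleted: "2023-06-11T00:00:00Z",    // placeholder timestamp
      };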


  • Agreed. I don’t see the point in trying to ban something before it exists and before we even know anything about how it would work. I get it, Meta has done some shit. But on the other hand, having such a big player in the Fediverse could be huge for its growth, especially since the Fediverse has a serious UX issue and UX is Meta’s strength.

    I don’t really understand the privacy concerns. Just don’t use their instances? Have y’all seen how the Fediverse already works? Stuff like your votes is already public, and that can’t easily be changed. And a nifty thing about federation: if Meta builds a federated Fediverse product and we later find out it’s pulling some shit, it’s just as easy for its users to migrate to another Fediverse platform.