If you’re interested in (co-)moderating any of the communities created by me, you’re welcome to message me.

I also have the account @Novocirab@jlai.lu. Furthermore, I own the account @daswetter@feddit.org, which I hope to make a small bot out of in the future.

  • 53 Posts
  • 174 Comments
Joined 9 months ago
Cake day: February 27th, 2025

  • And, most importantly, it’s about so much more than just the banners. For example:

    (1) A new GDPR loophole via “pseudonyms” or “IDs”. The Commission proposes to significantly narrow the definition of “personal data” – which would result in the GDPR not applying to many companies in various sectors. For example, sectors that currently operate via “pseudonyms” or random ID numbers, such as data brokers or the advertising industry, would no longer be (fully) covered. This would be done by adding a “subjective approach” to the text of the GDPR.

    Instead of an objective definition of personal data (e.g. data that is linked to a directly or indirectly identifiable person), a subjective definition would mean that if a specific company claims that it cannot (yet) or does not (currently) aim to identify a person, the GDPR ceases to apply. Such a case-by-case decision is inherently more complex and anything but a “simplification”. It also means that data may or may not be “personal” depending on a company’s internal thinking, or on the circumstances at a given point in time. This can also make cooperation between companies more complex, as some would fall under the GDPR and others would not.

    (2) Pulling personal data from your device? So far, Article 5(3) ePrivacy has protected users against remote access of data stored on “terminal equipment”, such as PCs or smartphones. This is based on the right to protection of communications under Article 7 of the Charter of Fundamental Rights of the EU and made sure that companies cannot “remotely search” devices.

    The Commission now adds “whitelisted” processing operations for access to terminal equipment, which would include “aggregated statistics” and “security purposes”. While the general direction of the changes is understandable, the wording is extremely permissive and would also allow excessive “searches” on user devices for (tiny) security purposes.

    (3) AI Training by Meta or Google with EU Citizens’ Personal Data? When Meta or LinkedIn started using social media data, it was widely unpopular. In one recent study, for example, only 7% of Germans said they want Meta to use their personal data to train AI. Nevertheless, the Commission now wants to allow the use of highly personal data (like the content of 15+ years of a social media profile) for AI training by Big Tech.


  • Interesting, thank you. From the text here,

    "As we collectively identify and validate slop across the web, Kagi’s SlopStop initiative will help transform those insights into a comprehensive, structured dataset of AI slop – an invaluable resource for training AI models.

    Access to the database will be shared soon. Use this form to express your interest if you’d like to receive updates."

    especially the part “an invaluable resource for training AI models”, the absence of any community-focused language, and the fact that Kagi, iirc, is still operating at a loss and looking for ways to become profitable, I fear it will be essentially commercial. But who knows, and even if it turns out that way, that still is not to say it’s all bad.


  • Here’s why you’re getting enshittified: we deliberately decided to stop enforcing competition laws. As a result, companies formed monopolies and cartels. This means that they don’t have to worry about losing your business or labor to a competitor, because they don’t compete. It also means that they can handily capture their regulators, because they can easily agree on a set of policy priorities and use the billions they’ve amassed by not competing to capture their regulators. They can hold a whip hand over their formerly powerful tech workers, mass-firing them and terrorizing them out of any Tron-inspired conceits about “fighting for the user.” Finally, they can use IP law to shut down anyone who makes technology that disenshittifies their offerings.

    You can take care to avoid enshittification, you can even make a fetish out of it, but without addressing these systemic failings, your individual actions will only get you so far. Sure, use privacy-enhancing tools like Signal to communicate with other people, but if the only way to get your kid to their little league game is to join the carpool group on Facebook, you’re going to hemorrhage data about everything you do to Meta.

    https://pluralistic.net/2025/07/31/unsatisfying-answers/#systemic-problems

  • One addition: On the right, “A computer from this decade?” — I think this can easily be widened to “A computer older than 15 years?” (swapping “yes” & “no”, of course). The laptop on my lap is exactly 10 years old, wasn’t high end back then, and runs OpenSUSE with KDE easily. Well, in all fairness, it’s now got 12 GB RAM, but that’s mostly for development purposes. Perhaps one could ask directly: “At least 8 GB RAM?”
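    As an aside, the “At least 8 GB RAM?” question from the flowchart can be answered directly from the command line. A minimal sketch, assuming a Linux system (it reads `/proc/meminfo`, which other OSes don’t provide):

    ```shell
    # Read total physical memory in KiB from /proc/meminfo (Linux only).
    total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    # Convert KiB to whole GiB (integer division).
    total_gib=$((total_kb / 1024 / 1024))
    # 8 GiB is the threshold suggested in the comment above; adjust to taste.
    if [ "$total_gib" -ge 8 ]; then
        echo "At least 8 GiB RAM: yes (~${total_gib} GiB)"
    else
        echo "At least 8 GiB RAM: no (~${total_gib} GiB)"
    fi
    ```

    Note that `MemTotal` reports slightly less than the installed RAM, since firmware and the kernel reserve some of it.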

  • I appreciate that. It’s important to note, though, that there’s much more wrong with their headline and subtitle than just the word “spam”. They’re trying hard to make it look as if it were really just a single peasant who didn’t stay in his lane, abusing the powers accorded to him through modern technology, with policymakers so foolish as to fall for it:

    One-man spam campaign ravages EU ‘chat control’ bill

    A software developer from Denmark is having an outsized influence on a hotly debated law to break open encrypted apps.

    They seem to think the only person allowed to have an outsized influence is Friede Springer, their owner.