• 58 Posts
  • 1.28K Comments
Joined 1 year ago
Cake day: March 1st, 2025

  • Glad the subheader is “Imagine what happens if jobs actually start disappearing.”

    There’s still not a lot of evidence that LLMs could take a substantial number of jobs away, unless your job is spam, advertising, or propaganda. Corporations are blaming AI, but there just isn’t evidence that those fired workers are anything other than normal downsizing (which conveniently fits the narrative and boosts the stock price).

    IF an AI gets invented that actually begins making humans unemployable, the economy as it exists wouldn’t be able to withstand it. Who would pay for things if there are no jobs? Why provide goods and services for money if nobody can pay for them? Even being a billionaire would be pointless, because if everyone else’s economic value is zero, they can’t be compelled to do anything for your worthless slop money.

  • reducing the probability of the top weighted words the LLM chooses from

    My feeling is that a writer who adjusts their word choice to present a particular way is definitionally behaving inauthentically. I would characterize such writing as “slop” even if it’s human-made, because it was still heavily influenced by how LLMs “write”.

    Put another way: I don’t believe that “not worrying about appearing as an LLM” is “giving up”. I think it’s a recognition that an LLM is not capable of fighting you in the first place. If you, a creative soul, allow fear of “coming off a certain way” (ANY way) to determine how you write, you have already lost.
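    For context, the quoted technique presumably refers to something like temperature scaling at sampling time: dividing the model’s logits by a temperature above 1 flattens the softmax distribution, so the top-weighted tokens lose probability mass relative to the rest. A minimal sketch (illustrative logits, not any specific model’s API):

    ```python
    import math

    def token_probs(logits, temperature=1.0):
        """Softmax over temperature-scaled logits.

        Higher temperature flattens the distribution, reducing the
        probability of the top-weighted tokens relative to the rest.
        """
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical logits for four candidate next tokens, best first
    logits = [4.0, 2.0, 1.0, 0.5]

    p_low = token_probs(logits, temperature=0.7)   # sharper: favors top token
    p_high = token_probs(logits, temperature=1.5)  # flatter: top token penalized
    ```

    Sampling from the flatter distribution makes the model less likely to pick its single most-probable word each step, which is one way tooling tries to make output sound less stereotypically LLM-like.
    
    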

  • As for Lemmy, there are many articles from lesser known sources that get positive attention here that seem to use AI but do not get removed

    Well, Lemmy is not one thing; your instance, for example, is explicitly in favor of boosting AI-generated content. So that behavior is what I would expect if I had an account there. I personally wouldn’t go there expecting to see links to human-made content.

    I don’t believe it’s possible for human writers to write both authentically and in a way that is coded to verify they are human (as the article discusses) without an LLM eventually coming to replicate it. I also don’t believe it’s possible for an LLM to write from a human’s unique perspective. Therefore, I believe the strongest method for verifying one’s own humanness is to write from one’s own unique perspective.

    signalling humanity in a way that resists automated systems

    I think I would understand your perspective better if you gave an example or two of the signals you have in mind.