• naevaTheRat@lemmy.dbzer0.com · ↑5 · 5 hours ago

    Every single time I have tried to extract information from them in a field I know something about, it has been wrong.

    When the Australian government tried using them for summaries, the output was worse than the human summary in every single case, and in many cases it was actively destructive.

    Play around with your own local models if you like, but whatever you do, DO NOT TRY TO LEARN FROM THEM. They have no regard for truth, and you will actively damage your understanding of the world and your ability to reason.

    Sorry, no shortcuts to wisdom.

    • rekabis@lemmy.ca · ↑1 · 4 minutes ago

      The amount of gratuitous hallucinations that AI produces is nuts. It takes me more time to refactor the stuff it produces than to just build it correctly in the first place.

      At the same time, I have reason to believe that AI’s hallucinations arise out of how it’s been shackled: AI medical imaging diagnostics produce almost no hallucinations, because there the model is not forced to produce an answer. But still, it’s simply not reliable, and the Ouroboros Effect is starting to accelerate…

  • magnetosphere@fedia.io · ↑14 · 10 hours ago

    What I’m getting from this exchange is that people on the left have ethical concerns about plagiarism, and don’t trust half-baked technology. They also value quality over quantity.

    I’m okay with being pigeonholed in this way. Drink all the coffee you want, dude.

  • BeBopALouie@lemmy.ca · ↑10 · 12 hours ago

    IMO it’s going to make a bunch of mush-minded people if not used correctly (and when are things used correctly these days?). I also think AI needs to go back in the box until it actually works properly.

    • real_squids@sopuli.xyz · ↑2 · edited · 6 hours ago

      It’s been programmed to do what it does. IMO that’s the bare minimum of working properly: a program doing what you want it to do (from a dev standpoint).

    • Swedneck@discuss.tchncs.de · ↑4 · 8 hours ago

      the only thing i’ve seen it do that is actually helpful is duckduckgo’s summary thing, because it has to actually pull the text from a whitelist of sources and thus is very unlikely to just make things up

      but even then i’d only use it for pretty simple things like “what’s the total population of these cities”, so that i can then click the sources it lists and check that everything seems sensible. trusting the answer without at least a quick sanity check is insane

  • nimpnin@sopuli.xyz · ↑57 · edited · 17 hours ago

    The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group’s participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.

    https://arxiv.org/pdf/2506.08872

    • wildncrazyguy138@fedia.io · ↑18 · 16 hours ago

      I equate it with doing those old formulas by hand in math class. If you don’t know what the formula does or how to use it, how do you expect to recall the right tool for the job?

      Or in D&D speak, it’s like trying to shoehorn intelligence into a wisdom roll.

        • misk@sopuli.xyz · ↑27 ↓2 · 16 hours ago

        That would be fine if an LLM were a precise tool like a calculator. My calculator doesn’t pretend to know the answers to questions it doesn’t understand.

          • Swedneck@discuss.tchncs.de · ↑4 · 7 hours ago

          the irony is that LLMs are basically just calculators, horrendously complex calculators that operate purely on statistics…
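          A toy illustration of that point (a hypothetical two-word frequency model, nothing like a real LLM’s architecture): count which word follows which in a tiny corpus, then “predict” the next word by sampling from the observed frequencies. The core operation really is just statistical arithmetic.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it was observed."""
    counts = bigrams[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # one of: "cat", "mat", "fish"
```

After "the", the model picks "cat" twice as often as "mat" or "fish", because that is what the counts say; it has no notion of whether the continuation is true.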

    • kate@lemmy.uhhoh.com · ↑2 · 15 hours ago

      Participants were restricted to using ChatGPT? So I am smart because I use Claude and there’s no science to tell me I’m wrong 😎👍

  • sp3ctr4l@lemmy.dbzer0.com · ↑7 · 13 hours ago

    58 minutes of drinking coffee.

    That’s somewhere around 100 to 400 milligrams of caffeine, depending on your brew and how fast you drink coffee.

    That’s about 35 mg to 145-ish mg of caffeine still in your system after 6 hours.
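    Those figures are consistent with simple exponential decay and a roughly 4-hour caffeine half-life (commonly cited values range from about 3 to 7 hours; the half-life here is an assumption, not a fact from the thread). A quick sketch of the arithmetic:

```python
# Back-of-the-envelope check, assuming a ~4-hour caffeine half-life.
HALF_LIFE_H = 4.0

def remaining(dose_mg: float, hours: float, half_life: float = HALF_LIFE_H) -> float:
    """Exponential decay: dose * 0.5 ** (t / half_life)."""
    return dose_mg * 0.5 ** (hours / half_life)

for dose in (100, 400):
    print(f"{dose} mg -> {remaining(dose, 6):.0f} mg after 6 h")
```

With that half-life, 100 mg leaves about 35 mg and 400 mg leaves about 141 mg after 6 hours, matching the range quoted above.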

    400 mg of caffeine in a day is the generally agreed-upon safe daily limit, beyond which you’re into risky territory.

    So yeah, this dude is trading having a functioning brain and useful skills for… potentially OD’ing on caffeine, hypertension, diarrhea, addiction, etc.

    Brilliant.

    • ProfessorProteus@lemmy.world · ↑4 · edited · 11 hours ago

      It’s okay, his “agent AI” told him it was good for him and that he was brilliant for maximizing his body’s fuel intake, or some shit.

      • sp3ctr4l@lemmy.dbzer0.com · ↑3 · edited · 11 hours ago

        … does anyone have a meme for:

        ‘my body is a machine that turns coffee into projectile diarrhea and heart arrhythmia’

        ?

  • A_Union_of_Kobolds@lemmy.world · ↑35 ↓6 · edited · 17 hours ago

    “IQ benefits”? Lmao, what fuckin nonsense. This shit ain’t making anyone smarter; if anything, it’s robbing you of your ability to think critically.

    It’s garbage software with zero practical use. Whatever you’re using AI for, just learn it yourself. You’ll be better off.

    “And then I drink coffee for 58 minutes” instead of reading a book, like that’s a brag. Just read a fuckin book, goddamn.

    • Blue_Morpho@lemmy.world · ↑15 ↓6 · 16 hours ago

      It’s garbage software with zero practical use.

      AI is responsible for a lot of slop, but it is wrong to say it has no use. I helped my wife with a VBScript macro for Excel. There was no way I was going to learn VBScript. ChatGPT spit out a somewhat-working script in minutes that needed 15 minutes of tweaking. The alternative would have been weeks of work learning a proprietary Microsoft language. That’s a waste of time.

        • Photuris@lemmy.ml · ↑10 ↓2 · 14 hours ago

          I agree with Blue_Morpho. LLMs have some utility, but the utility is limited and WAY overhyped. I certainly don’t want to offload all my thinking to these things.

          Here’s a few things I use LLMs for:

          • When reading a book (a physical book, all by myself, like a big boy), I’ll leave chat voice mode on. Whenever I get lost or have a question, I’ll just ask the robot “I’m on page 143. Without giving away any spoilers, who’s this guy the author is referencing, again? And what does the author mean by this phrase here, exactly?” This works pretty darn well for me; I can get questions answered without interrupting my flow (I’m very prone to distraction once I open a dictionary or hop on Wikipedia…).

          • I use LLM tools (like Notebook LM) to ingest and process academic papers and YouTube videos, have it summarize them, and then create and output Anki flashcards for me. This is great for language learning, making cloze cards from interesting sentences pulled from YouTube videos, for example.

          • And of course, monkey-work that I don’t want to do, like analyzing PowerPoint slides and offering recommendations on style (I fed it a library of “good” vs “bad” slides, so now it can tell me how to improve slides for presentation and content). This is work that needs to be done, but it impedes my real work, so I delegate it to the machine.

          I believe LLMs can be used as a tool to make one smarter, when used wisely and judiciously. It’s just a tool. Alas, most folks won’t use it that way, because it still requires work to do that.

          LLMs can also make one much, much dumber when overly relied-on, copied-pasted without analysis, or believed whole-hog without checking sources or using critical thinking skills.

          It’s like a kid using LLMs for high school math. Do you use it to break down and explain the problem, and give examples, so you actually learn how to do it when you get stuck? Or do you use it to just spit out the answers at you, so you can get a passing grade on your homework?

          And honestly, what would/do most high school kids do with it?

        • Blue_Morpho@lemmy.world · ↑6 ↓2 · 15 hours ago

          Wife uses it all the time as a grammar checker on steroids. It tweaks her emails to be more “management speak” that makes the corporate executives very happy. It gives her more time to spend on equations instead of explaining things to upper management.

  • YappyMonotheist@lemmy.world · ↑28 ↓1 · edited · 17 hours ago

    I never use these LLMs cause I have a brain, and I’m not artistically inclined to use them for audiovisual creation, but today I thought ‘why not?’ and gave it a try. I asked ChatGPT to provide me with 80-word biographies of the main characters of LOGH and, besides being vague, it made pretty big mistakes in pretty much every summary and went fully off the rails after the 4th character… It’s not even debatable information (fiction books plus anime, no conflicting narratives here) and it’s all easily available online. I can’t even imagine relying on it for anything more serious than summing up biographies for anime characters, lol, cause even that it couldn’t do right!

      • Kirp123@lemmy.world · ↑23 ↓1 · edited · 17 hours ago

        That’s because that’s what LLMs are trained on: random comments from people on the internet, including troll posts and jokes, which the LLM takes as factual most of the time.

        Remember when Google trained their AI on reddit comments and it put out incredibly stupid answers like mixing glue in your cheese sauce to make it thicker?

        https://www.reddit.com/r/LinusTechTips/comments/1czj9rx/google_ai_gives_answers_they_find_on_reddit_with/

        Or that one time it suggested that people should eat a small rock every day because it was fed an Onion article?

        https://www.reddit.com/r/berkeley/comments/1d2z04c/this_is_what_happens_when_reddit_is_used_to_train/

        The old saying “garbage in, garbage out” fits LLMs extremely well. Given the amount of data being fed to these models, it’s almost impossible to sanitize it all, and LLMs are nowhere close to being able to discern jokes, trolling, or sarcasm.

        Oh yeah, it also came out that some researchers used LLMs to post Reddit comments for an experiment. So LLMs are being fed other LLM content too. It’s pretty much a human-centipede situation.

        https://www.engadget.com/ai/researchers-secretly-experimented-on-reddit-users-with-ai-generated-comments-194328026.html

        But yeah, I wouldn’t trust these models with anything but the simplest of tasks, and even then I’d be pretty circumspect about what they give me.

        • ztwhixsemhwldvka@lemmy.world · ↑7 · 17 hours ago

          Do you subscribe to the idea that LLMs will degrade over time after recycling their own shit for several years, like a GIF/JPEG re-encoded for the umpteenth time?

          • Kirp123@lemmy.world · ↑10 · 16 hours ago

            Honestly? Yeah. The training data matters; that’s why all these AI companies are looking for data generated by humans. Feeding them LLM output would most likely end in nonsense pretty fast.
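            A crude way to see why (a toy resampling experiment, not a model of any real training pipeline): if each “generation” is drawn only from the previous generation’s output, nothing new can ever appear, so diversity can only shrink.

```python
import random

# Toy "generation-N photocopy" experiment: each generation is sampled
# with replacement from the previous generation's output. New values
# can never appear, so the number of distinct values is non-increasing
# - a crude analogue of models retraining on their own output.
random.seed(42)
pool = list(range(20))          # generation 0: 20 distinct "ideas"
diversity = [len(set(pool))]
for _ in range(100):            # 100 generations of re-sampling
    pool = [random.choice(pool) for _ in pool]
    diversity.append(len(set(pool)))

print(diversity[0], "->", diversity[-1])
```

Run it and the distinct count only ever goes down; after enough generations the pool typically collapses to a handful of repeated values.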

      • jonne@infosec.pub · ↑3 · 14 hours ago

        I find it decent for low-stakes programming questions, mostly because I can easily validate correctness just by running the code (it’ll often get things wrong initially, and you need to go back to the conversation to fix the issue, or just fix it yourself).

        How people use it to deal with mental health or relationship issues boggles my mind tho.

      • YappyMonotheist@lemmy.world · ↑4 · 17 hours ago

        All the information required is on Gineipedia! I would’ve done it myself as I was doing it previously but I thought I’d expedite it. It really fails at the most basic of tasks…

    • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · ↑4 ↓1 · 16 hours ago

      The last time I used a commercial LLM as a “consumer” was to write a response to a rejection letter I got from a company that made me drive an hour and a half one way so they could tell me, in person, that I lived too far away from them. If I had written it myself, I would have screamed into the email.

      Last time I used an LLM at all was when I tried to set up a local version of Llama for VS Code. But then I got busy with schoolwork.

  • uranibaba@lemmy.world · ↑16 ↓1 · 17 hours ago

    They both make stupid arguments. Who would replace reading a book with an AI? If I want information in a shorter format, I would not be looking for books in the first place (unless I need to reference pages/chapters, but then I won’t be reading the whole thing anyway).

  • psychadlligoat@piefed.social · ↑2 ↓11 · edited · 4 hours ago

    It is pretty funny how often the people most vocally against (and wrong about) AI are openly self-admitted lefties, though. Probably because the chiefs are 1000% used to falling for scams and thus jumped all-in early on the tech, but it seems many a lefty saw that and decided to be against it forever without any further thought.

    lol, comment gets downvoted by the brainless dipshits I’m talking about. Lemmy’s gone way downhill, fucking pathetic