ChatGPT has a style-over-substance trick that seems to dupe people into thinking it’s smart, researchers found
Developers often prefer ChatGPT’s responses about code to those submitted by humans, despite the bot frequently being wrong.

  • sj_zero@lotide.fbxl.net
    1 year ago

    Anyone who has actually needed a correct answer to a question realized this a long time ago.

    The problem is that most people don’t bother checking the answers.

    • GenderNeutralBro@lemmy.sdf.org
      1 year ago

      If you need a correct answer, you’re doing it wrong!

      I’m joking of course, but there’s a seed of truth: I’ve found ChatGPT’s wrong or incomplete answers to be incredibly helpful as a starting point. Sometimes it will suggest a Python module I didn’t even know about that does half my work for me. Or sometimes it has a lot of nonsense but the one line I actually need is correct (or close enough for me to understand).

      Nobody should be copying code off Stack Overflow without understanding it, either.
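
      As a toy illustration of that kind of module suggestion (the task here and the pointer to difflib are invented for this example, not taken from an actual ChatGPT answer), the useful line might be nothing more than:

        from difflib import get_close_matches

        # Instead of hand-rolling an edit-distance loop, the standard library
        # already ships an approximate string matcher.
        commands = ["status", "commit", "checkout", "rebase"]
        print(get_close_matches("comit", commands, n=1))  # -> ['commit']

      Even if the rest of the reply were nonsense, that one pointer could be the line that does half the work.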

    • sumofchemicals@lemmy.world
      1 year ago

      This hasn’t been my experience. Yes, ChatGPT gets stuff wrong, and fairly regularly. But I can ask it my question directly, include sample code, and get an answer immediately. Anyone going to Stack Overflow either has to google around and sift through answers for relevance, or has to post the question and wait for someone to respond.

      With either ChatGPT or Stack Overflow you have to check the answer to make sure it works; that’s how coding goes. But with one of them I know whether it works pretty much immediately, with a fairly low investment of time and effort. And if it doesn’t, I just rephrase the question, or literally say “that doesn’t seem to work, now I’m getting this error: $error”.
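
      For what it’s worth, that “rephrase it or paste the error back” loop is also easy to script against the API. A minimal sketch using the OpenAI Python client (the model name, prompts, and snippet.py path are placeholders, and error handling is omitted):

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        messages = [{"role": "user",
                     "content": "Why does this snippet fail?\n" + open("snippet.py").read()}]

        while True:
            reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
            answer = reply.choices[0].message.content
            print(answer)
            error = input("Paste the new error (blank if it works now): ").strip()
            if not error:
                break
            messages.append({"role": "assistant", "content": answer})
            messages.append({"role": "user", "content": f"That doesn't seem to work, now I'm getting this error: {error}"})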

      • sj_zero@lotide.fbxl.net
        1 year ago

        When it gets stuff wrong, though, it doesn’t just get things slightly wrong; it makes things up entirely. I’ve seen it invent whole APIs, and I’ve seen it generate legal citations out of whole cloth and cite entire laws that don’t exist. I’ve seen it very confidently tell me to run a command that clearly doesn’t work, and if it did work, I wouldn’t have been asking the question in the first place.

        But I don’t think the alternative to ChatGPT would even be Stack Overflow; it would be an expert. Given the choice between the two, you would definitely want an expert every time.

        • sumofchemicals@lemmy.world
          1 year ago

          You’re right that it completely fabricates stuff. Even so, it improves my productivity, because I can take multiple swings at a question and still be faster than googling. (And sometimes googling just doesn’t turn up an answer at all.)

          Of course you’ve got to know that’s how the tool works, and some people are hyping it and acting like it’s useful in every situation. And there are scenarios where I don’t know enough about the subject to ask the right question, or to realize how incorrect the answer it’s giving is.

          I only commented because you said you can’t get the correct answer and that people don’t check the answer, neither of which matches my or my friends’ actual usage.

      • the_medium_kahuna@lemmy.world
        1 year ago

        But the fact is that you need to check every time to be sure it isn’t the rare inaccuracy. Even if it could cite sources, how would you know it was interpreting the source’s statements accurately?

        IMO it’s useful for outlining and getting ideas flowing, but beyond that high level the utility falls off pretty quickly.

        • DreamButt@lemmy.world
          1 year ago

          Yeah, it’s great for exploring options. Anything that’s purely textual is good enough to give you a general idea. And more often than not it will catch a mistake in its explanation if you ask for clarification. But actual code? Nah, it’s about 50/50 whether it gets it right the first time, and even then the style is never to my liking.

  • 1984@lemmy.today
    1 year ago

    This was the first thing I noticed on day one. The way it “speaks” is designed to sound like a polite authority in the field.

  • neptune@dmv.social
    1 year ago

    When you underpay a bunch of gig workers to rate the outputs? Obviously it’s going to write in a manner that best BS’s a layperson.

    It would be too expensive to hire experts in every field to train the AI to actually do good work. Imagine paying software engineers $100k plus benefits to vote on its code outputs, or getting Miss Manners to comment on its etiquette suggestions.
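
    As a side note, those rater votes typically end up as pairwise preferences that a reward model is trained to reproduce, so whatever style reliably wins the comparison is the style that gets rewarded. A toy sketch of that scoring, with made-up numbers, just to show the mechanism:

      import math

      def preference_prob(score_preferred: float, score_other: float) -> float:
          # Bradley-Terry-style probability that the rater-preferred answer
          # "beats" the other; training pushes the reward model to make this high.
          return 1.0 / (1.0 + math.exp(-(score_preferred - score_other)))

      # Made-up scores: if a confident-sounding but wrong answer keeps winning
      # the raters' votes, the model is steered toward that style anyway.
      print(round(preference_prob(2.1, 0.4), 2))  # 0.85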

  • givesomefucks@lemmy.world
    1 year ago

    It’s like crypto, or really any other con job.

    It makes idiots feel smart.

    Make a mark feel like they’re smart, and they’ll become attached to the idea and defend it to their death. Because the alternative is they aren’t really smart and fell for a scam.

    When smart people try to explain that to the idiots, it just makes them defend the scam even harder.

    Try to tell people ChatGPT isn’t great, and they just ramble on about nonsense they don’t even understand themselves, then claim anyone who disagrees simply isn’t smart enough to get it.

    It’s a great business plan if you have zero morals, which is why the method never really goes away, just moves to another product.

    • sumofchemicals@lemmy.world
      1 year ago

      I have seen someone type “tell me how to make a million-dollar business” into ChatGPT. Of course that’s not going to work. But LLMs have immediate, obvious value that crypto does not, and I think making the comparison reveals a lack of experience with the useful applications. I’m using ChatGPT nearly every day as a tool to help with coding. It’s not a replacement for a person, but it is like giving a person a forklift.

  • BellaDonna@mujico.org
    1 year ago

    It’s the same way people become convinced I’m way smarter than I actually am: it’s how I construct sentences and respond, the words I choose, not so much the substance or veracity of what I say.

  • daellat@lemmy.world
    1 year ago

    It’s certainly gotten worse, as we’ve probably all seen in the news. When GPT-4 came to the API it was impressive at times. One caveat always remained: don’t blindly trust it, but that goes for Stack Overflow replies too.

    Ohh cool, a downvote and smug reply. Go back to reddit or something.

    Lol https://mastodon.social/@rodhilton/110894818243613681

    • abhibeckert@lemmy.world
      1 year ago

      I’ve seen that in the news. I haven’t experienced it at all. In fact I’m getting far better results now than I ever did before, though I suspect that’s mostly on me - experience using almost any tool will improve the output.