As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

  • kalkulat@lemmy.world · 2 days ago

    Trouble is that ‘quick answers’ mean the LLM took no time to do a thorough search. Could be right or wrong - just by luck.

    When you need the details to be verified by trustworthy sources, it’s still do-it-yourself time. If you -don’t- verify, and repeat a wrong answer to someone else, -you- are untrustworthy.

    A couple months back I asked GPT a math question (about primes) and it gave me the -completely wrong- answer … ‘none’ … answered as if it had no doubt. It was -so- wrong it hadn’t even tried. I pointed it to the right answer (‘an infinite number’) and to the proof. It then verified that.
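
    (For context, the proof being pointed to here is presumably Euclid’s classic argument; a sketch:)

    ```latex
    % Sketch of Euclid's proof that there are infinitely many primes.
    % Suppose the primes were a finite list p_1, p_2, ..., p_n. Consider
    \[
      N = p_1 p_2 \cdots p_n + 1 .
    \]
    % Dividing N by any p_i leaves remainder 1, so no p_i divides N.
    % But N > 1 has at least one prime factor, which must therefore be
    % missing from the list -- a contradiction.
    ```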

    A couple of days ago, I asked it the same question … and it was completely wrong again. It hadn’t learned a thing. After some conversation, it told me it couldn’t learn. I’d already figured that out.

    • Tar_Alcaran@sh.itjust.works · 2 days ago

      Trouble is that ‘quick answers’ mean the LLM took no time to do a thorough search.

      LLMs don’t “search”. They essentially provide weighted parrot-answers based on what they’ve seen elsewhere.

      If you tell an LLM that the sky is red, they will tell you the sky is red. If you tell them your eyes are the colour of the sky, they will repeat that your eyes are red. LLMs aren’t capable of checking if something is true.

      They’re just really fast parrots with a big vocabulary. And every time they squawk, it burns a tree.
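
      To make the “weighted parrot” point concrete, here’s a toy sketch (the corpus and weights are invented, and real models work at a vastly larger scale): the next word is sampled from learned frequencies, and nothing in the loop ever checks whether the output is true.

      ```python
      import random

      # Toy "weighted parrot": a bigram table distilled from whatever text
      # it was fed. The weights below are made up for illustration.
      NEXT_WORD_WEIGHTS = {
          "the": {"sky": 5, "eyes": 2},
          "sky": {"is": 8},
          "eyes": {"are": 8},
          "is": {"red": 6, "blue": 1},   # told "the sky is red" often enough...
          "are": {"red": 6, "blue": 1},  # ...it repeats "red" about eyes too
      }

      def squawk(word: str, length: int = 3) -> str:
          """Generate text by repeatedly sampling the next word by weight.
          Note that no step here checks whether the claim is true."""
          out = [word]
          for _ in range(length):
              options = NEXT_WORD_WEIGHTS.get(out[-1])
              if not options:
                  break
              words, weights = zip(*options.items())
              out.append(random.choices(words, weights=weights)[0])
          return " ".join(out)

      print(squawk("the"))  # e.g. "the sky is red" or "the eyes are red"
      ```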

    • chaosCruiser@futurology.today (OP) · 2 days ago

      Math problems are a unique challenge for LLMs and often produce bizarre mistakes. An LLM can look up formulas and constants, but it usually struggles to apply them correctly. For example, when counting the hours in a week, it will say it’s calculating 7*24, which looks right, but somehow the answer still comes out as 10 🤯. Like, WTF? How did that happen? That particular problem isn’t exactly hard, but the same phenomenon shows up in more complicated ones too. I could give other examples, but this post is long enough as it is.

      For reliable results in math-related queries, I find it best to ask the LLM for formulas and values, then perform the calculations myself. The LLM can typically look up information reasonably accurately but will mess up the application. Just use the right tool for the right job, and you’ll be ok.
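
      A minimal sketch of that division of labour (llm_lookup below is a hypothetical stand-in for whatever chat interface you use, not a real API): the model supplies the values, and ordinary code does the arithmetic deterministically.

      ```python
      # Hypothetical split: the LLM supplies formulas/constants, plain code
      # does the arithmetic. llm_lookup is a stand-in, not a real API.
      def llm_lookup(question: str) -> dict:
          # Imagine this calls your chat model and parses its reply.
          return {"days_per_week": 7, "hours_per_day": 24}

      values = llm_lookup("How many days in a week, and hours in a day?")
      hours_per_week = values["days_per_week"] * values["hours_per_day"]
      print(hours_per_week)  # 168 -- no surprise '10' here
      ```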

    • Wolf314159@startrek.website · 2 days ago

      Is your abuse of the ellipsis and dashes supposed to be ironic? Isn’t that an LLM tell?

      I’m not even sure what the (‘phrase’) construct is meant to imply, but it’s wild. Your abuse of punctuation in general feels like a machine trying to convince us it’s human, or like a machine transcribing a human’s stream of consciousness.