• xthexder@l.sw0.com
    6 months ago

    Because hallucinations pretty much exactly describes what’s happening? All of your suggested terms are less descriptive of what the issue is.

    The definition of hallucination:

    A hallucination is a perception in the absence of an external stimulus.

    In the case of generative AI, it's generating output that doesn't match its training data "stimulus". In other words, false statements, or "facts" that don't exist in reality.

    • ALostInquirer@lemm.ee
      6 months ago

      perception

      This is the problem I take with this: there's no perception in this software. It's faulty, misapplied software when one tries to employ it to generate reliable, factual summaries and responses.

      • xthexder@l.sw0.com
        6 months ago

        I have adopted the philosophy that human brains might not be as special as we’ve thought, and that the untrained behavior emerging from LLMs and image generators is so similar to human behaviors that I can’t help but think of it as an underdeveloped and handicapped mind.

        I hypothesize that a human brain whose only perception of the world is the training data force-fed to it by a computer would have all the same problems LLMs do right now.

        To put it another way: the line between what is sentient and what isn't is getting blurrier and blurrier. LLMs surpassed the Turing test a few years ago. We're simulating the level of intelligence of a small animal today.