• rekabis@lemmy.ca · 6 hours ago

    The number of gratuitous hallucinations AI produces is nuts. It takes me more time to refactor the stuff it produces than it would to just build it correctly in the first place.

    At the same time, I have reason to believe that AI’s hallucinations arise from how it’s been shackled - AI medical imaging diagnostics produce almost no hallucinations because the model isn’t forced to produce an answer - but still. It’s simply not reliable, and the Ouroboros Effect is starting to accelerate…

    • naevaTheRat@lemmy.dbzer0.com · 4 hours ago

      It’s not “shackled”; they are completely different technologies.

      Imaging diagnosis assistance is something like computer vision -> feature extraction -> some sort of classifier.
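
      To make the distinction concrete, here’s a minimal sketch of that kind of pipeline in Python with scikit-learn - the synthetic data, the crude feature function, and the random-forest choice are all illustrative assumptions, not any real diagnostic system:

      ```python
      # Sketch of the pipeline above: images -> feature extraction -> classifier.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Stand-in for a stack of grayscale scans: 200 images, 64x64 pixels.
      images = rng.random((200, 64, 64))
      labels = rng.integers(0, 2, size=200)  # toy labels: 0 = benign, 1 = suspicious

      def extract_features(img):
          # "Feature extraction": crude per-region summary statistics.
          # Real systems use learned or handcrafted descriptors instead.
          halves = np.split(img, 2, axis=0)
          return np.array([h.mean() for h in halves] + [h.std() for h in halves])

      X = np.stack([extract_features(im) for im in images])
      X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

      # A discriminative classifier: it maps each input to a class or a score.
      # There is no free-text generation step anywhere in this pipeline, which is
      # why "hallucination" in the LLM sense doesn't apply to it.
      clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
      print("toy accuracy:", clf.score(X_test, y_test))
      ```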

      Don’t be tricked by the magical marketing term “AI”. That’s like assuming a tic-tac-toe algorithm is the same thing as a spam filter because they’re both “AI”.

      Also, medical imaging stuff makes heaps of errors, or latches onto insane features like the style of machine used to capture the image. They’re getting better, but image analysis is a relatively tractable problem.