• naevaTheRat@lemmy.dbzer0.com · 11 hours ago

    Every single time I have tried to extract information from them in a field I know stuff about, it has been wrong.

    When the Australian government tried to use them for making summaries, in every single case the result was worse than the human summary, and in many cases it was actively destructive.

    Play around with your own local models if you like, but whatever you do, DO NOT TRY TO LEARN FROM THEM; they have no regard for truth. You will actively damage your understanding of the world and your ability to reason.

    Sorry, no shortcuts to wisdom.

    • rekabis@lemmy.ca · 6 hours ago

      The number of gratuitous hallucinations that AI produces is nuts. It takes me more time to refactor the stuff it produces than to just build it correctly in the first place.

      At the same time, I have reason to believe that AI’s hallucinations arise from how it has been shackled: AI medical imaging diagnostics produce almost no hallucinations because the model is not shackled into producing an answer. Even so, it’s simply not reliable, and the Ouroboros Effect is starting to accelerate…
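
      To make the “not shackled into producing an answer” point concrete, here is a toy sketch (nothing like any real diagnostic system; the data, threshold, and scikit-learn setup are invented for illustration): the same classifier with and without the option to abstain.

      ```python
      # Toy illustration of "must answer" vs "allowed to say nothing".
      # Data and threshold are made up; this is not a real diagnostic model.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 5))
      y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

      clf = LogisticRegression().fit(X, y)

      def forced_answer(x):
          """Always returns a label, however unsure the model is."""
          return int(clf.predict([x])[0])

      def may_abstain(x, threshold=0.9):
          """Returns a label only when predicted probability clears the threshold."""
          proba = clf.predict_proba([x])[0]
          if proba.max() >= threshold:
              return int(proba.argmax())
          return "no call, refer to a human"

      sample = rng.normal(size=5)
      print(forced_answer(sample), may_abstain(sample))
      ```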

      • naevaTheRat@lemmy.dbzer0.com · 5 hours ago

        It’s not “shackled”; they are completely different technologies.

        Imaging diagnosis assistance is something like computer vision -> feature extraction -> some sort of classifier.
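
        A rough sketch of what that kind of pipeline looks like, assuming scikit-learn and some hand-rolled toy features (the features, data, and labels here are invented for illustration, not taken from any real diagnostic model):

        ```python
        # Sketch of image -> feature extraction -> classifier.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def extract_features(image):
            # Toy "computer vision" step: boil an image down to a few numbers.
            return np.array([
                image.mean(),          # overall brightness
                image.std(),           # contrast
                (image > 0.8).mean(),  # fraction of very bright pixels
            ])

        # Fake training data: 100 random 64x64 "scans" with made-up labels.
        rng = np.random.default_rng(0)
        images = rng.random((100, 64, 64))
        labels = rng.integers(0, 2, size=100)  # 0 = benign, 1 = suspicious

        X = np.stack([extract_features(img) for img in images])
        clf = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
        clf.fit(X, labels)

        # Classify a new scan.
        new_scan = rng.random((64, 64))
        print(clf.predict([extract_features(new_scan)]))
        ```

        None of that involves a language model generating text, which is the whole point.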

        Don’t be tricked by the magical marketing term AI. That’s like assuming a tic-tac-toe algorithm is the same thing as a spam filter because they’re both “AI”.

        Also, medical imaging models make heaps of errors or latch onto insane features, like which model of machine was used to take the image. They’re getting better, but image analysis is a relatively tractable problem.