• Initiateofthevoid@lemmy.dbzer0.com · 3 days ago

      The idea of AI accounting is so fucking funny to me. The problem is right in the name. They account for stuff. Accountants account for where stuff came from and where stuff went.

      Machine learning algorithms are black boxes that can’t show their work. They can absolutely do things like detect fraud and waste by detecting abnormalities in the data, but they absolutely can’t do things like prove an absence of fraud and waste.
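
      A minimal sketch of that kind of abnormality detection (Python with scikit-learn; the transaction amounts and the contamination rate are invented purely for illustration):

      ```python
      # Hypothetical example: flag unusual ledger entries for human review.
      # This can surface outliers; it cannot prove the absence of fraud.
      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(0)
      amounts = rng.normal(loc=500, scale=50, size=(1000, 1))  # ordinary payments
      amounts[-3:] = [[9800], [12050], [40]]                   # a few oddballs

      model = IsolationForest(contamination=0.01, random_state=0)
      flags = model.fit_predict(amounts)                       # -1 = anomaly

      for i in np.where(flags == -1)[0]:
          print(f"transaction {i}: {amounts[i][0]:.2f} looks abnormal, review manually")
      ```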

      • futatorius@lemm.ee · 2 days ago

        LLMs often use bizarre “reasoning” to come up with their responses. And if asked to explain those responses, they then use equally bizarre “reasoning.” That’s because the explanation is just another post-hoc response.

        Unless explainability is built in, it is impossible to validate an LLM.

      • vivendi@programming.dev · 3 days ago

        For usage like that you’d wire an LLM into a tool use workflow with whatever accounting software you have. The LLM would make queries to the rigid, non-hallucinating accounting system.

        I still don’t think it would be anywhere close to a good idea, because you’d need a lot of safeguards, and if it fucks up your accounting you’ll have some unpleasant meetings with the local equivalent of the IRS.
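
        To make that concrete, a rough sketch of what such a tool-use loop could look like (everything here is invented: the call_llm stub stands in for a real LLM API, and the ledger is a toy in-memory table):

        ```python
        # Hypothetical tool-use loop: the LLM only *proposes* a structured call;
        # the deterministic accounting system executes it. All names are invented.
        import json
        import sqlite3

        # Toy "accounting system": an in-memory ledger with fabricated numbers.
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE expenses (quarter TEXT, amount REAL)")
        con.executemany("INSERT INTO expenses VALUES (?, ?)",
                        [("2024-Q3", 1200.0), ("2024-Q3", 80.5), ("2024-Q2", 400.0)])

        def call_llm(prompt: str) -> str:
            """Stand-in for a real LLM API; returns a canned tool request."""
            return json.dumps({"tool": "sum_expenses", "args": {"quarter": "2024-Q3"}})

        def sum_expenses(quarter: str) -> float:
            """The rigid, non-hallucinating part: a plain SQL aggregate."""
            row = con.execute("SELECT COALESCE(SUM(amount), 0) FROM expenses "
                              "WHERE quarter = ?", (quarter,)).fetchone()
            return row[0]

        TOOLS = {"sum_expenses": sum_expenses}

        request = json.loads(call_llm("What did we spend in Q3?"))
        result = TOOLS[request["tool"]](**request["args"])  # run by real code, not the LLM
        print(f"{request['tool']}({request['args']}) -> {result}")
        ```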

        • pinball_wizard@lemmy.zip · 2 days ago

          The LLM would make queries to the rigid, non-hallucinating accounting system.

          And then sometimes adds a hallucination before returning an answer, particularly when it encounters anything it wasn’t trained on, like the important moments when business leaders should be taking a closer look.

          There’s not enough popcorn in the world for the shitshow that is coming.

          • vivendi@programming.dev · 2 days ago

            You’re misunderstanding tool use. The LLM only requests that something be done; the actual system executes it and returns the result. You can also have the LLM summarize the result, but hallucinations in that workload are remarkably low (though without tuning it can drop important information from the response).

            The place where it can hallucinate is in generating the steps for your natural-language query, i.e. at the entry stage. That’s why you need to safeguard like your ass depends on it. (Which it does, if your boss is stupid enough.)
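
            For example, a safeguard at that entry stage can be as blunt as refusing any proposed call that doesn’t match an allow-list and a strict argument schema. This is an invented sketch, not any particular framework’s API:

            ```python
            # Hypothetical guardrail: reject the LLM's proposed tool call unless it
            # matches an allow-list and a strict argument schema. Names are invented.
            import json
            import re

            ALLOWED_TOOLS = {
                # tool name -> argument name -> validator
                "sum_expenses": {"quarter": re.compile(r"^\d{4}-Q[1-4]$").fullmatch},
                "list_invoices": {"vendor_id": str.isdigit},
            }

            def validate_request(raw: str) -> dict:
                req = json.loads(raw)                    # malformed JSON fails here
                checks = ALLOWED_TOOLS.get(req.get("tool"))
                if checks is None:
                    raise ValueError(f"tool not allowed: {req.get('tool')!r}")
                args = req.get("args", {})
                if set(args) != set(checks):
                    raise ValueError(f"unexpected arguments: {sorted(args)}")
                for name, ok in checks.items():
                    if not ok(str(args[name])):
                        raise ValueError(f"bad value for {name}: {args[name]!r}")
                return req

            print(validate_request('{"tool": "sum_expenses", "args": {"quarter": "2024-Q3"}}'))
            try:
                validate_request('{"tool": "delete_ledger", "args": {}}')
            except ValueError as err:
                print("rejected:", err)
            ```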

            • pinball_wizard@lemmy.zip · edited · 1 day ago

              I’m quite aware that it’s less likely to technically hallucinate in these cases. But focusing on that technicality doesn’t serve users well.

              These (interesting and useful) use cases do not address the core issue: the query was written by the LLM, without expert oversight, which still leads to situations that are effectively hallucinations.

              Technically, it is returning a “correct” direct answer to a question that no rational actor would ever have asked.

              But when a hallucinated (correct-looking but deeply flawed) query is sent to the system of record, it’s most honest to still call the results a hallucination as well, even though they are technically real data, just astonishingly poorly chosen real data.

              The meaningless, correct-looking, and wrong result for the end user is still just going to be called a hallucination by common folks.

              For common usage, it’s important not to promise end users that these scenarios are free of hallucinations.

              You and I understand that, technically, they’re not getting back a hallucination, just an answer to a bad question.

              But for end users to understand how to use the tool safely, they still need to know that a meaningless, correct-looking, and wrong answer is still possible (and, today, still likely).
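
              As an invented illustration of “real data, wrong question”: both queries below run fine against the same toy ledger, but only the first answers what the user actually asked; the second returns real numbers to a question nobody asked.

              ```python
              # Invented example: two syntactically valid queries, one semantically wrong.
              import sqlite3

              con = sqlite3.connect(":memory:")
              con.execute("CREATE TABLE payments (paid_on TEXT, gross REAL, refunded REAL)")
              con.executemany("INSERT INTO payments VALUES (?, ?, ?)", [
                  ("2024-07-02", 100.0, 0.0),
                  ("2024-07-15", 250.0, 250.0),  # fully refunded
                  ("2024-08-01", 300.0, 0.0),
              ])

              # The user's question: "How much revenue did we keep in July?"
              intended = con.execute(
                  "SELECT SUM(gross - refunded) FROM payments "
                  "WHERE paid_on BETWEEN '2024-07-01' AND '2024-07-31'").fetchone()[0]

              # A plausible-looking generated query: it runs and returns real data, but
              # ignores refunds and the date filter, answering a question nobody asked.
              generated = con.execute("SELECT SUM(gross) FROM payments").fetchone()[0]

              print(intended, generated)  # 100.0 vs 650.0
              ```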

        • futatorius@lemm.ee · 2 days ago

          The LLM would make queries to the rigid, non-hallucinating accounting system.

          ERP systems already do that, just not using AI.

    • Korhaka@sopuli.xyz · 3 days ago

      How easy will it be to fool the AI into getting the company into legal trouble? Oh well.

    • vivendi@programming.dev · 3 days ago

      This is because autoregressive LLMs work on high-level “tokens”. There are LLM experiments that can access byte-level information and correctly answer such questions.
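
      For example (this sketch assumes the tiktoken package is installed; the exact split depends on the tokenizer), the model is handed opaque token IDs rather than letters:

      ```python
      # What the model actually receives: token IDs, not characters.
      # Requires the tiktoken package; the split shown depends on the tokenizer.
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")
      ids = enc.encode("strawberry")
      print(ids)                                              # a few integers
      print([enc.decode_single_token_bytes(i) for i in ids])  # multi-letter chunks

      # Counting letters is trivial with byte/character access, which the
      # token stream does not expose to the model:
      print("strawberry".count("r"))  # 3
      ```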

      Also, they don’t want to support you, omegalul. Do you really think call centers are hired to give a fuck about you? This is intentional.

      • Repple (she/her)@lemmy.world · 3 days ago

        I don’t think that’s the full explanation, though, because there are examples of models that will correctly spell out the word first (i.e., they know the component letter tokens) and still miscount the letters after doing so.

        • vivendi@programming.dev · 3 days ago

          No, this literally is the explanation. The model understands the concept of “strawberry”: it can output it (and that alone is very complicated) in English as “strawberry”, in Persian as توت فرنگی, and so on.

          But the model does not understand how many Rs exist in “strawberry”, or how many ت exist in توت فرنگی.

          • Repple (she/her)@lemmy.world · edited · 3 days ago

            I’m talking about models printing out the component letters first, not just the full word, as in “S - T - R - A - W - B - E - R - R - Y”, and then still getting the count wrong. You’re absolutely right that the model reads in whole tokens encoded as vectors, but if it holds a relationship from that encoding to the component spelling, which it seems it must, given that it can output the letters individually, then something else is going wrong. I’m not saying all models fail this way, and I’m sure many fail in exactly the way you describe, but I have seen this failure mode (which is what I was trying to describe), and in that case an alternate explanation would be necessary.

            • vivendi@programming.dev · edited · 3 days ago

              The model ISN’T outputting the letters individually; byte-level models (as I mentioned) do, but transformers don’t.

              The model’s output for “Strawberry” is more like <S-T-R><A-W-B>

              <S-T-R-A-W-B><E-R-R>

              <S-T-R-A-W-B-E-R-R-Y>

              A token can be a letter, part of a word, any single lexeme, any word, or even multiple words (“let be”).

              Okay, I did a shit job of demonstrating the time axis. The point is that the model doesn’t know the underlying letters of the previous tokens, and this process only moves forward in time.
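
              A toy illustration of that time axis (the vocabulary and IDs are completely made up): at each step the context is just the IDs of the earlier tokens, not their spellings.

              ```python
              # Toy illustration (invented vocabulary and IDs): the autoregressive context
              # is a sequence of integer IDs; the spelling of earlier tokens is not in it.
              VOCAB = {101: "Straw", 102: "berry", 103: " is", 104: " red"}

              context = []  # what the "model" conditions on, step by step
              for next_id in (101, 102, 103, 104):
                  print(f"step {len(context)}: context so far = {context}")
                  context.append(next_id)

              # Only code that detokenizes back to text can count letters reliably:
              word = VOCAB[101] + VOCAB[102]  # "Strawberry"
              print(word, "contains", word.lower().count("r"), "r's")
              ```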