A multimillion-dollar conspiracy trial that stretched across the worlds of politics and entertainment is now reaching into the tech world, with arguments that a defense attorney for a Fugees rapper bungled closing arguments by using an artificial intelligence program.

  • givesomefucks@lemmy.world · 1 year ago

    I saw an article from Ars that tracked the AI company down. It’s registered to the same office as the lawyer, and it immediately started advertising this case, bragging about being used in an actual trial, with no mention of how much it fucked up or that the client was found guilty.

    He’s got a pretty good shot at this, and the lawyer should 100% face consequences: even for just using it, especially if he owns the AI company he used, and doubly so for not disclosing the connection or informing the client it was being used.

    • Touching_Grass@lemmy.world · 1 year ago

      But how do you tell whether the AI performed worse or better than the lawyer would have? What is the bar for competence here? What if it was a losing case regardless, and this is just a way to exploit the system for a second trial?

      • dogslayeggs@lemmy.world · 1 year ago

        That’s irrelevant. The AI is not licensed to practice law, so if the lawyer didn’t do any work to check the AI’s output, then the AI was the one defending the client and the lawyer was just a mouthpiece for it.

        • Toribor@corndog.social · 1 year ago

          Yeah, I feel like this is the same as if the lawyer had used a crystal ball to decide how to handle the case. If he lied to clients about it, or was also selling crystal-ball-reading services, that seems pretty bad.

        • Touching_Grass@lemmy.world · 1 year ago

          But is it a mistrial if the lawyer uses autocorrect?

          If the lawyer reviewed the output and found it acceptable, then how can you argue the AI was practicing law? I can write whatever argument I want, feed it to AI to correct and improve, and iterate through the whole thing. It’s just a robust autocorrect.

          • zaph@sh.itjust.works · 1 year ago

            “But is it a mistrial if the lawyer uses autocorrect?”

            If you’re found guilty because of a typo, you’re probably going to have a successful appeal.

            “If the lawyer reviewed the output and found it acceptable, then how can you argue the AI was practicing law?”

            This could very well be what he has to prove: that the lawyer didn’t do his due diligence and just trusted the AI.

          • dogslayeggs@lemmy.world · 1 year ago

            “But is it a mistrial if the lawyer uses autocorrect?”

            No, that’s a bad question. Autocorrect takes your own knowledge and writing as input and makes minor corrections to spelling, along with suggestions to fix grammar. It doesn’t come up with legal analysis on its own, and any suggested grammar changes should be scrutinized by the licensed professional to make sure they don’t affect the argument.

            And your second statement isn’t what happened here. If the lawyer had written an argument and then fed it to AI to correct and improve, the process would at least have started with legal analysis written by a licensed professional. In this case, the lawyer bragged that he spent only seconds on the case instead of hours because the AI did everything. If he only spent seconds, then he very likely didn’t start by writing his own analysis and feeding it to the AI, and he likely didn’t review the analysis the AI spit out.

            This is happening in the medical world, too. Young doctors and med students are feeding symptoms into AI and asking for a diagnosis. That’s a legitimate use of AI, as long as the diagnosis that gets spit out is heavily scrutinized by a trained doctor. If they just take the AI’s output and apply the standard medical treatment without double-checking whether the diagnosis makes sense, that isn’t any better than me typing my symptoms into Google and diagnosing myself from the results.

            • Touching_Grass@lemmy.world · 1 year ago

              I watched the Legal Eagle video about another case, where they submitted documents straight from an LLM, hallucinated cases and all. I can agree that’s idiotic. But there are a ton of use cases for these things across a lot of professions, and incidents like this might leave people assuming that using AI at all is idiotic.

              My concern is that a lot of people are trying to convince others to be afraid or suspicious of something that is very useful, because they feel their careers or skills are at risk of being diminished, and so they come up with these crazy stories.

        • logicbomb@lemmy.world · 1 year ago

          I don’t know about this particular lawyer, but I have heard that some lawyers will try novel court strategies, knowing that it’s a win-win situation. If the strategy works, then their clients benefit, and if the strategy doesn’t work, their clients get an appeal for having ineffective counsel where they normally wouldn’t have an appeal.

          • Neato@kbin.social · 1 year ago

            If a client gets an appeal for ineffective counsel, how is that counsel not brought up before the bar for review? That seems like a death knell for a lawyer.

            • Spiralvortexisalie@lemmy.world · 1 year ago

              Not sure if there is a procedure for when a lawyer is still practicing; I have never heard of a bar referral after a ruling on a motion for ineffective assistance in NY. But I have heard of retiring attorneys falling on the grenade, so to speak, and writing affidavits claiming that anything they touched in the slightest was somehow deficient, spoiled, or tainted by their involvement, if it can get a shot at more billable work or an appeal.

            • logicbomb@lemmy.world · 1 year ago

              People don’t generally get sanctioned for making honest mistakes. I didn’t say that a lawyer would tank the case on purpose, just that they’d try a new strategy. If no lawyers were allowed to try new strategies without facing penalties, that also seems like a bad system.

      • givesomefucks@lemmy.world · 1 year ago

        Well, the lawyer gave interviews after his client was found guilty, bragging about how instead of spending hours on the case he only spent “seconds,” and how the AI would mean he could take on a lot more clients and make a lot more money.

        So it’s going to be pretty hard for him to now argue he put in just as much effort.

        • Touching_Grass@lemmy.world · 1 year ago

          But that’s like saying that instead of spending hours on an essay, I cut the time in half with MS Word. It’s just a tool. If the lawyer produced arguments with it and reviewed them, then what’s the issue? And this still doesn’t determine whether the work presented was good or not.

          • givesomefucks@lemmy.world · 1 year ago

            Because he didn’t review it…

            He used it “as is” so he could advertise his AI tool as something that “does it all by itself.”

            It sounds like rather than advertising it as a tool for lawyers, he’s advertising it to clients as a replacement for lawyers.

            • Touching_Grass@lemmy.world · edited · 1 year ago

              100%, that is dumb.

              But in all seriousness, I think we all need a pocket lawyer.

              It’s one of those things that causes a ton of inequality. I think it’s too early now, but within our lifetimes we could all have a bunch of services in our pockets that are hard to access today. But that’s not going to happen if we reject this stuff as idiotic.

      • Tetsuo@jlai.lu · 1 year ago

        In my opinion, how well the AI performed is irrelevant. What matters is that an AI was used instead of the lawyer.

        If it is proven that the lawyer used what the AI delivered verbatim, then it doesn’t matter how good that text was. The client has the right to a lawyer, not to an AI pretending to be a lawyer.

          • pete_the_cat@lemmy.world · 1 year ago

            Autocorrect does single words, and you usually review each word. Something like ChatGPT will generate an entire document for you; it’s up to you whether to verify the correctness of everything in there, which most people don’t.

            • Touching_Grass@lemmy.world · 1 year ago

              Autocorrect can also fix sentence structure; you could replace every sentence with its suggestion. So what I’m trying to say is that it’s about how it’s used. A lot of people are shocked that someone would use it to produce things on their behalf. I’m going the other way and saying that, used correctly, what it produces is superior to what’s produced without it.

      • Saneless@lemmy.world · 1 year ago

        Good point. If a lawyer is stupid enough to use AI, he’s probably too stupid to be a good lawyer in the first place.

        • Touching_Grass@lemmy.world · 1 year ago

          I think it’s a good use; the idiotic thing is how it was used. It sounds like he didn’t validate the output afterward, which might just be unfamiliarity with new tech. Might be a lawyer looking to get a new trial. Might be pure incompetence. But I still think it’s a good use if done correctly.

    • Salamendacious@lemmy.world (OP) · 1 year ago

      That was mentioned briefly in the article. I was about to look into it a little more, but I got sidetracked. Thanks!