• Megaman_EXE@beehaw.org · 19 points · edited · 1 day ago

    I find this funny because the police have been doing this to civilians. My main concern is that this tech is not 100% accurate. I feel like it shouldn’t be used on its own.

    I guess if it’s used as a supplementary tool and not the main piece of evidence, it could maybe be okay? But I would be scared it would target an innocent individual, which could have very negative or dangerous consequences. The main thing would be accuracy. I don’t know if that was addressed, as the article is paywalled.

    Edit: I think anything that forces those in power to take accountability for their actions is great, though. More tools should be in place to prevent abuse of power.

    • BurningRiver@beehaw.org · 8 points · 19 hours ago

      I find this funny because this:

          “My main concern is that this tech is not 100% accurate. I feel like it shouldn’t be used on its own.”

      is generally the least of their concerns.

    • Scrubbles@poptalk.scrubbles.tech · 5 points · 21 hours ago

      That’s the nuance of AI that anyone who has done any actual work with ML has known for decades now. ML is amazing. It’s not perfect. It’s actually pretty far from perfect. So you should never ever use it as a solo check, but it can be great for a double check.

      Such as with cancer: AI can be a wonderful tool for detecting a melanoma, if used correctly. For example:

      • A doctor has already cleared a mole, but you want to know whether it warrants a second opinion from another doctor. You could require the model to report a confidence of, say, 80% that the first doctor is correct and the mole is fine (a rough sketch of this kind of gate follows after this list).

      • If you do not have immediate access to a doctor, it can be a fine preliminary check, again only up to a certain confidence. Say you are worried about a mole but cannot see a doctor easily. You could snap a photo, and a very high confidence rating would say it is probably fine, with a disclaimer that this is just an AI model and that if the mole changes or you are still worried, you should get it checked.
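
      A minimal sketch of that kind of confidence gate, in Python. Everything here is hypothetical: model_prob_benign stands in for whatever classifier is actually in use, and the 80% threshold is illustrative, not clinical guidance.

          # Hypothetical sketch: an ML score used only as a *second* check,
          # never as the sole decision-maker.

          def model_prob_benign(photo) -> float:
              # Placeholder for a real model's predicted P(benign).
              return 0.92

          def second_opinion_needed(photo, doctor_says_benign: bool,
                                    agree_threshold: float = 0.80) -> bool:
              # Flag for a second doctor when the model disagrees too
              # strongly with the first doctor's all-clear.
              p_benign = model_prob_benign(photo)
              if doctor_says_benign:
                  # Doctor cleared it; escalate only if the model's
                  # confidence falls below the agreement threshold.
                  return p_benign < agree_threshold
              # Doctor flagged it; the model never overrides a human.
              return True

          print(second_opinion_needed(photo=None, doctor_says_benign=True))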

      Unfortunately, that nuance, that it is all just probabilities, is completely lost on both the creators of these AI tools and their users: the risks are not actually communicated, so blind trust is the number one problem.

      We see it here with police too. “It said it’s them”. No, it only said to a specific confidence that it might be them. That’s a very different thing. You should never use it to find someone, only to verify someone.

      I actually really like how airport security implemented it, because that is using it well. Here’s an ID with a photo of a person. Compare it to the photo taken there in person, and it should verify to a very high confidence that they are the same person. If in doubt, there’s a human there to verify it as well. That’s good ML usage.
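
      To make that find-versus-verify distinction concrete, here is a rough sketch, assuming photos have already been turned into embedding vectors by some face model; the helper names and the 0.9 threshold are made up for illustration. A 1:N search can only ever return the most similar entry plus a score, while a 1:1 check compares exactly two photos and defers to a human below the threshold.

          import math

          # Illustrative only: embeddings are plain lists of floats here;
          # in practice they would come from a face-embedding model.

          def cosine_similarity(a, b):
              dot = sum(x * y for x, y in zip(a, b))
              norm_a = math.sqrt(sum(x * x for x in a))
              norm_b = math.sqrt(sum(y * y for y in b))
              return dot / (norm_a * norm_b)

          def identify(probe, gallery):
              # 1:N search ("find someone"): returns the *most similar*
              # entry and a score -- 'it might be them', never 'it is them'.
              return max(
                  ((person_id, cosine_similarity(probe, emb))
                   for person_id, emb in gallery.items()),
                  key=lambda pair: pair[1],
              )

          def verify(id_photo_emb, live_photo_emb, threshold=0.9):
              # 1:1 check, as at an airport gate: one claimed identity,
              # one comparison, and a human fallback when the score is low.
              score = cosine_similarity(id_photo_emb, live_photo_emb)
              return "match" if score >= threshold else "refer to a human officer"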

    • Powderhorn@beehaw.org · 3 points · 19 hours ago

      I’m miles away from AI, so this may be me talking out of my ass, but shouldn’t a smaller database (thousands) be more accurate than anything orders of magnitude larger?
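
      For what it’s worth, a back-of-envelope sketch of why that intuition holds, with a made-up per-comparison false-match rate of one in a million: if every comparison has false-match probability p, the chance of at least one false hit when searching a gallery of N people is 1 - (1 - p)^N, which climbs quickly as N grows.

          # Back-of-envelope: a bigger gallery means more chances for a
          # false match. p is a made-up per-comparison false-match rate.
          p = 1e-6

          for n in (1_000, 100_000, 10_000_000):
              p_any_false_match = 1 - (1 - p) ** n
              print(f"gallery of {n:>10,}: P(>=1 false match) = {p_any_false_match:.5f}")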