Highlights: The White House issued draft rules today that would require federal agencies to evaluate and constantly monitor algorithms used in health care, law enforcement, and housing for potential discrimination or other harmful effects on human rights.

Once in effect, the rules could force changes to US government activity that depends on AI, such as the FBI’s use of face recognition technology, which has been criticized for failing to take steps called for by Congress to protect civil liberties. The new rules would require government agencies to assess existing algorithms by August 2024 and stop using any that don’t comply.

  • Cris@lemmy.world · 1 year ago

    I mean, that broadly seems like a good thing. Execution is important, but on paper this seems like the kind of forward-thinking policy we need.

    • pandacoder@lemmy.world · 1 year ago

      Quite frankly, it doesn’t put enough restrictions on the various “national security” agencies, so while it may help stem the tide of irresponsible usage by many of the lesser-impact agencies, it doesn’t do the same for the agencies that we know will be (and have been) the worst offenders.