ChatGPT generates cancer treatment plans that are full of errors

Researchers at Brigham and Women’s Hospital found that ChatGPT provided false information when asked to design cancer treatment plans, and that the plans generated by OpenAI’s chatbot were full of errors.

  • clutch@lemmy.ml
    1 year ago

    The fear is that hospital administrators, armed with their MBA degrees, will think about using it to replace expensive, experienced physicians and diagnosticians.

    • whoisearth@lemmy.ca
      1 year ago

      They’ve been trying this shit for decades already with established AI like IBM Watson. This isn’t a new pattern. Those in charge need to keep driving costs down and profits up.

      Race to the bottom.

    • Obinice@lemmy.world
      1 year ago

      If that were legal, I’d absolutely be worried, you make a good point.

      Even doctors need additional specialist qualifications to do things like diagnose illnesses via radiographic imagery. Specialised AI is making good progress in aiding these sorts of tasks, but a generalised and very poor AI like ChatGPT will never be legally certified to do this sort of thing.

      Once we have a much more effective generalised AI, things will get more interesting. It’ll have to prove itself thoroughly before being certified, though, so even after it appears it’ll still be a few years before we see it used in clinical applications.