• this@sh.itjust.works · 5 months ago

    Yea, let’s just slap the missile equivalent of chatgpt on a bunch of drone missiles, what could go wrong? /s

    Seriously though, what happens if the AI driving the drone hallucinates? I wouldn’t want to be anywhere near these things when they’re testing them.

    • mozz@mbin.grits.dev (OP) · 5 months ago

      I highly doubt they are putting LLMs on their little throwaway drones. The US military has been working on “figure out what that thing is and blow it up automatically” technology since at least the 90s; e.g. modern warship defense systems use it to react faster than a human can to an incoming missile.

      Personally I am much more worried about it working exactly as intended.

      • disguy_ovahea@lemmy.world · edited · 5 months ago

        You are correct. Large language models like ChatGPT are a subset of deep learning, which is a subset of machine learning. Common examples of simple machine learning software are facial recognition, social media algorithms, speech-to-text, and predictive text.

        There is no reason to include software as complex, resource-intensive, or experimental as an LLM when dedicated ML will suffice.
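
        To illustrate the distinction, here is a minimal sketch of the kind of “dedicated ML” classifier the comment alludes to: a nearest-centroid classifier over a couple of hand-made feature vectors. The features (speed in Mach, radar cross-section in m²) and all the numbers are invented for illustration; real systems use far richer sensor data, but the point stands that simple, deterministic models can do classification without anything resembling an LLM.

        ```python
        # Toy nearest-centroid classifier ("dedicated ML" in miniature).
        # Feature vectors: (speed_mach, radar_cross_section_m2) -- invented values.
        import math

        training = {
            "missile":  [(2.5, 0.10), (3.0, 0.05), (2.0, 0.20)],
            "aircraft": [(0.8, 5.00), (0.9, 8.00), (0.7, 6.00)],
        }

        def centroid(points):
            # component-wise mean of the training points for one label
            n = len(points)
            return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

        centroids = {label: centroid(pts) for label, pts in training.items()}

        def classify(features):
            # pick the label whose centroid is closest in feature space
            return min(centroids, key=lambda lbl: math.dist(features, centroids[lbl]))

        print(classify((2.8, 0.08)))  # fast, tiny radar signature -> "missile"
        print(classify((0.85, 7.0)))  # slow, large signature -> "aircraft"
        ```

        The whole model is a handful of averages and a distance comparison: tiny, fast, and auditable, which is roughly why defense systems favored this style of classifier long before deep learning existed.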