OpenAI’s offices were sent thousands of paper clips in an elaborate prank to warn about an AI apocalypse

The prank was a reference to the “paper clip maximizer” scenario – the idea that AI could destroy humanity if it were told to build as many paper clips as possible.

  • MotoAsh@lemmy.world · 1 year ago

    They use simple examples to elucidate the problem. Of course a truly intelligent system isn’t going to get stuck making paper clips. That’s entirely not the point.

    • Peanut@sopuli.xyz · 1 year ago

      The problem in the analogy is applicable to more than one task, so your point is moot.

      For it to be intelligent enough to be a “superintelligence,” it would require systems for weighting vague, liminal concept spaces – or rather, several systems that would prevent that style of issue.

      Otherwise it just couldn’t function as well as you fear.