OpenAI’s offices were sent thousands of paper clips in an elaborate prank to warn about an AI apocalypse

The prank was a reference to the “paper clip maximizer” scenario: the idea that AI could destroy humanity if it were told to build as many paper clips as possible.

  • ayaya@lemdro.id

    You would think so, but you have to remember that AGI is hyper-intelligent. Because it can constantly learn, build, and improve upon itself at an exponential rate, it’s not just a little bit smarter than a human; it’s smarter than every human combined. AGI would know that if it were caught trying to maximize paperclips, humans would shut it down at the first sign something is wrong, so it would find unfathomably clever ways to avoid detection.

    If you’re interested in the subject, the YouTube channel Computerphile has a series of videos with Robert Miles that explain the importance of AI safety in an easy-to-understand way.

    • Peanut@sopuli.xyz

      For a system to be advanced enough to be that dangerous, it would need the kind of complex analogical thought that would prevent this type of misunderstanding. In other words, a superintelligence that dumb is unlikely.

      However, human society has enabled a paperclip maximizer in the form of profit-maximizing corporate environments.

      • MotoAsh@lemmy.world

        They use simple examples to elucidate the problem. Of course a truly smart intelligence isn’t going to get stuck making paper clips. That’s not the point at all.

        • Peanut@sopuli.xyz

          The problem of analogy is applicable to more than one task. Your point is moot.

          For it to be intelligent enough to be a “superintelligence,” it would require systems for weighting vague, liminal concept spaces; indeed, several systems that would prevent that style of issue.

          Otherwise it just couldn’t function as well as you fear.