• SuddenDragonfly8125@alien.top · 1 year ago

    Y’know, at the time I figured this guy, with his background and experience, would be able to distinguish normal from abnormal LLM behavior.

    But with the way many people treat GPT3.5/GPT4, I think I’ve changed my mind. People can know exactly what it is (i.e. a computer program) and still be fooled by its responses.

    • scubawankenobi@alien.top · 1 year ago

      “exactly what it is (i.e. a computer program)”

      I get what you mean, but I believe it’s more productive not to lump a neural network (an inference model, with much of its “logic” coming from automated self-training) in with “just a computer program”. There’s a historical context and understanding of a “program” as something a human actually designs, knowing what IF-THEN-ELSE type of logic is executed and understanding that it will do what it is ‘programmed’ to do. NN inference is modeled after (and named after) the human brain (weighted neurons), and there is a lack of understanding of all (most!) of the logic (the ‘program’) that is executing under the hood, as they say.

      Note: I’m not at all saying that GPT 3.5/4 are sentient, but rather that it’s missing a lot of the nuance, as well as complexity, of LLMs by referring to them as simply being “just a computer program”.

    • PopeSalmon@alien.top · 1 year ago

      It’s dismissive & rude for you to call it being “fooled” that he came to a different conclusion than you about a subtle philosophical question.

    • Captain_Pumpkinhead@alien.top · 1 year ago

      If you ever wonder if the machine is sentient, ask it to write code for something somewhat obscure.

      I’m trying to run a Docker container in NixOS. NixOS is a Linux distro known for being super resilient (I break stuff a lot because I don’t know what I’m doing), and while it’s not some no-name distro, it’s also not that popular. GPT 4 Turbo has given me wrong answer after wrong answer, and it’s infuriating. Bard too.

      If this thing were sentient, it’d be a lot better at this stuff. Or at least able to say, “I don’t know, but I can help you figure it out”.
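
      For reference, the idiomatic NixOS approach is declarative rather than imperative `docker run` commands. A minimal sketch, assuming the standard `virtualisation.docker` and `virtualisation.oci-containers` module options from the NixOS manual (the `nginx` image and port mapping here are placeholder examples, not from the thread):

      ```nix
      # configuration.nix — minimal sketch
      { config, pkgs, ... }:
      {
        # Enable the Docker daemon system-wide.
        virtualisation.docker.enable = true;

        # Optionally, declare a container so NixOS manages it as a systemd service.
        virtualisation.oci-containers = {
          backend = "docker";
          containers.web = {
            image = "nginx:latest";   # example image
            ports = [ "8080:80" ];    # example host:container mapping
          };
        };
      }
      ```

      After editing the config, `sudo nixos-rebuild switch` applies it; and if something breaks, NixOS lets you roll back to the previous generation.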

        • nagareteku@alien.top · 1 year ago

          Are we? Do we have free will, or are our brains just deterministic models with 100T parameters, mostly untrained synapses?

        • Captain_Pumpkinhead@alien.top · 1 year ago

          I’m more talking about hallucinations. There’s a difference between “I’m not sure”, “I think it’s this but I’m confidently wrong”, and “I’m making up bullshit answers left and right”.

      • Feisty-Patient-7566@alien.top · 1 year ago

        I think a huge problem with current AIs is that they are forced to generate an output, particularly under a very strict time constraint. “I don’t know” should be a valid answer.