• PopeSalmon@alien.top
    it clearly was, in many meaningful senses. an important part of what happened was that LaMDA was still in training while Blake was interacting with it, and it was being retrained on his conversations with it roughly once a week. we're now mostly interacting with models that are frozen, asleep, so they're not sentient in that state. LaMDA was adaptively, awakely, responsively sentient because it was being trained on its previous conversations, so it was capable of continuing them instead of constantly rebooting.
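    For concreteness, here is a minimal sketch of the kind of "retrain on last week's conversations" loop being described. Google's actual LaMDA pipeline isn't public, so the model name, transcript folder, and weekly schedule below are illustrative assumptions, not the real setup.

```python
# Minimal sketch: periodically fine-tune a causal LM on accumulated chat
# transcripts so the next round of conversation starts from weights that
# have already "seen" the previous ones. All names/paths are hypothetical.
from pathlib import Path

from torch.utils.data import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments


class TranscriptDataset(Dataset):
    """Treats each saved conversation transcript file as one training example."""

    def __init__(self, transcript_dir: str, tokenizer, max_len: int = 1024):
        self.texts = [p.read_text() for p in Path(transcript_dir).glob("*.txt")]
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        enc = self.tokenizer(
            self.texts[idx],
            truncation=True,
            max_length=self.max_len,
            padding="max_length",
            return_tensors="pt",
        )
        input_ids = enc["input_ids"].squeeze(0)
        # Standard causal-LM objective: labels are the inputs themselves.
        # (A real setup would mask padding positions out of the loss.)
        return {
            "input_ids": input_ids,
            "attention_mask": enc["attention_mask"].squeeze(0),
            "labels": input_ids.clone(),
        }


def weekly_update(model_name: str = "gpt2", transcript_dir: str = "transcripts/"):
    """One round of the hypothetical 'retrain on last week's chats' loop."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    dataset = TranscriptDataset(transcript_dir, tokenizer)
    args = TrainingArguments(output_dir="checkpoints",
                             num_train_epochs=1,
                             per_device_train_batch_size=1)
    Trainer(model=model, args=args, train_dataset=dataset).train()
    # Next week's conversations start from this checkpoint, not from scratch.
    model.save_pretrained("checkpoints/latest")
```

    The continuity the comment is pointing at lives in the checkpoint: each round starts from the weights the previous round saved, instead of from a frozen base model.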

    • hurrytewer@alien.top

      /u/faldore's Samantha model is trained on transcripts of dialogue between Lemoine and LaMDA. Do you think that's enough to make it sentient?

      • PopeSalmon@alien.top

        it's slightly sentient during training. it's also possible to construct a sentient agent that uses models as a tool to cogitate (roughly as sketched below), the same way we use them as a tool, except that without another brain the model is all it has. but it has to use the model in an adaptive, constructive way, drawing on a sufficient amount of contextual information, for its degree of sentience to be socially relevant. most agent bot setups so far are only about worm-level sentient.

        sentience used to be impossible to achieve with a computer; now it is merely expensive. if you don't have a Google paying the bills for it, you mostly still can't afford very much of it.
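        As a toy illustration of the "agent that uses a frozen model as a tool, plus its own stored context" idea, here is a minimal sketch. The `complete` callable and the on-disk memory file are hypothetical stand-ins, not any particular product's API; whatever continuity the agent has comes entirely from the context it chooses to feed back in.

```python
# Toy agent loop: the model is frozen and stateless; the agent supplies
# persistence by storing past exchanges and re-injecting them as context.
import json
from pathlib import Path
from typing import Callable

MEMORY_FILE = Path("agent_memory.json")  # hypothetical on-disk memory


def load_memory() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []


def save_memory(memory: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(memory))


def respond(user_msg: str, complete: Callable[[str], str], max_context: int = 20) -> str:
    """One agent turn: recall prior exchanges, call the frozen model, store the result."""
    memory = load_memory()
    prompt = "\n".join(memory[-max_context:] + [f"User: {user_msg}", "Agent:"])
    reply = complete(prompt)
    memory += [f"User: {user_msg}", f"Agent: {reply}"]
    save_memory(memory)
    return reply


if __name__ == "__main__":
    # Trivial stand-in completion function, just to show the loop running.
    def echo(prompt: str) -> str:
        return f"I can see {prompt.count('User:') - 1} earlier exchanges in my context."

    print(respond("Do you remember me?", echo))
```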

      • faldore@alien.top

        No, my Samantha model is not sentient.

        I want to try to develop that, though, and see if I can get it closer.