• red75prime@alien.top

    LLMs might still lack something that the human brain has. An internal monologue, for example, which allows us to allocate more than a fixed amount of compute per output token.

    • InterstitialLove@alien.top

      You can just give an LLM an internal monologue. It’s called a scratchpad.
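
      To be concrete, here's a minimal sketch of the idea. `complete()` is a hypothetical stand-in for whatever LLM completion call you're using; nothing else here comes from a real API either.

      ```python
      def complete(prompt: str) -> str:
          """Hypothetical stand-in for an LLM completion call."""
          raise NotImplementedError("wire this up to your model of choice")

      def answer_with_scratchpad(question: str) -> str:
          # First pass: let the model fill a scratchpad with step-by-step
          # reasoning. Every scratchpad token is an extra forward pass,
          # so hard questions can soak up more compute than easy ones.
          scratch = complete(
              f"Question: {question}\n"
              "Work through this step by step before answering:\n"
          )
          # Second pass: condition on the scratchpad and ask for just
          # the final answer.
          return complete(
              f"Question: {question}\n"
              f"Scratchpad: {scratch}\n"
              "Final answer:"
          )
      ```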

      I’m not sure how this applies to the broader discussion; honestly, I can’t tell if we’re off-topic. But once you have LLMs, you can implement basically everything humans can do. The only limitations I’m aware of that aren’t trivial from an engineering perspective are:

      1. Current LLMs mostly aren’t as smart as humans: they literally have fewer neurons and can’t model systems with as much complexity.
      2. Humans have more complex memory, with a mix of short-term and long-term and a fluid process for moving between them (see the sketch after this list).
      3. Humans can learn on the go; this is equivalent to “online training” and is probably related to long-term memory.
      4. Humans are multimodal. It’s unclear to what extent this is a “limitation” versus a pedantic nit-pick; I’ll let you decide how to account for it.
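
      For points 2 and 3, the usual engineering workaround looks something like a two-tier memory: a rolling window for short-term context plus a searchable store for long-term recall. This is a toy sketch under loose assumptions (all names invented, crude keyword overlap standing in for real embedding search), not a claim about how any particular system does it:

      ```python
      from collections import deque

      class TwoTierMemory:
          """Toy two-tier memory: a bounded deque models short-term
          memory, a growing list models long-term storage."""

          def __init__(self, window: int = 20):
              self.short_term = deque(maxlen=window)  # recent turns, oldest evicted
              self.long_term: list[str] = []          # everything ever remembered

          def remember(self, text: str) -> None:
              # New information lands in both tiers; the deque's maxlen
              # handles "forgetting" from short-term automatically.
              self.short_term.append(text)
              self.long_term.append(text)

          def recall(self, query: str, k: int = 3) -> list[str]:
              # Crude relevance: count words shared with the query.
              q = set(query.lower().split())
              scored = sorted(
                  self.long_term,
                  key=lambda t: len(q & set(t.lower().split())),
                  reverse=True,
              )
              return scored[:k]

          def context(self, query: str) -> str:
              # Prompt context = retrieved long-term items + recent window.
              return "\n".join(self.recall(query) + list(self.short_term))
      ```
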
      • red75prime@alien.top

        > It’s called a scratchpad.

        And the network still uses skills that it learned in a fixed-computation-per-token regime.

        Sure, future versions will lift many existing limitations, but I was talking about current LLMs.