Paper. I am not affiliated with this paper or its authors.

Abstract:

Large language models exhibit surprising emergent generalization properties, yet also struggle on many simple reasoning tasks such as arithmetic and parity. This raises the question of if and when Transformer models can learn the true algorithm for solving a task. We study the scope of Transformers’ abilities in the specific setting of length generalization on algorithmic tasks. Here, we propose a unifying framework to understand when and how Transformers can exhibit strong length generalization on a given task. Specifically, we leverage RASP (Weiss et al., 2021) – a programming language designed for the computational model of a Transformer – and introduce the RASP-Generalization Conjecture: Transformers tend to length generalize on a task if the task can be solved by a short RASP program which works for all input lengths. This simple conjecture remarkably captures most known instances of length generalization on algorithmic tasks. Moreover, we leverage our insights to drastically improve generalization performance on traditionally hard tasks (such as parity and addition). On the theoretical side, we give a simple example where the “min-degree-interpolator” model of learning from Abbe et al. (2023) does not correctly predict Transformers’ out-of-distribution behavior, but our conjecture does. Overall, our work provides a novel perspective on the mechanisms of compositional generalization and the algorithmic capabilities of Transformers.

Twitter thread from one of the work’s authors.

  • BrotherGraham@alien.topB · 11 months ago

    Page 6 of the paper https://arxiv.org/pdf/2310.16028.pdf has the intuition behind the RASP programming language:

    A key characteristic of RASP is that it only allows parallelizable operations, because Transformers are an inherently parallel model of computation. This makes performing inherently-sequential computation, such as iterating through each input symbol and updating an internal state, tricky if not impossible to write in RASP. This is why loops are not allowed in RASP: a Transformer has only constant depth, and cannot directly simulate an arbitrary number of loop iterations.
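    To make the "only parallelizable operations" idea concrete, here is a minimal Python sketch of two RASP-style primitives (`select` and `aggregate`, loosely following Weiss et al., 2021 — this is an illustration, not the official RASP interpreter). Note that the program below never iterates over input symbols while updating a running state: the whole sequence is transformed in one select/aggregate step, just as attention does.

    ```python
    # Illustrative sketch of RASP-style primitives. Function names follow
    # the RASP paper's vocabulary, but the implementation is a toy.

    def select(keys, queries, predicate):
        """Boolean attention pattern: row q marks which keys q attends to."""
        return [[predicate(k, q) for k in keys] for q in queries]

    def aggregate(pattern, values):
        """For each query, return the (single) selected value, if any."""
        out = []
        for row in pattern:
            selected = [v for v, keep in zip(values, row) if keep]
            out.append(selected[0] if selected else None)
        return out

    def reverse(tokens):
        """Classic RASP example: reverse a sequence using positions only.
        One parallel select/aggregate -- no loop over the input symbols."""
        n = len(tokens)
        positions = list(range(n))
        # output position q reads from input position n - 1 - q
        pattern = select(positions, positions, lambda k, q: k == n - 1 - q)
        return aggregate(pattern, tokens)

    print(reverse(list("abcd")))  # ['d', 'c', 'b', 'a']
    ```

    By contrast, a task like parity seems to force a left-to-right scan with a running state, which is exactly the kind of computation these primitives do not express directly.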

    • BrotherGraham@alien.topB · 11 months ago

      However, pages 6 and 13 describe tricks for making loops:

      One way to bypass this limitation is to exploit the autoregressive inference procedure. Since the model is called iteratively at inference time, this effectively provides an “outer-loop” that can enable a certain kind of sequential computation, where the sequential state is encoded into the prior context. This is exactly what scratchpads enable.
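      A toy Python sketch of that "outer loop" idea (the names `step` and `decode_parity` are mine, not the paper's): each call to the model does only one unit of work, and the sequential state lives in the context the model has already emitted, not inside the model.

      ```python
      # Hedged sketch: autoregressive decoding as an outer loop for parity.
      # step() is one "forward pass" -- it does a single XOR, reading the
      # running state from the scratchpad tokens already in the context.

      def step(context):
          bits, scratch = context
          prev = scratch[-1] if scratch else 0  # last partial parity emitted
          i = len(scratch)                      # bits folded in so far
          return prev ^ bits[i]                 # fold in exactly one more bit

      def decode_parity(bits):
          """The outer loop lives OUTSIDE the model: call step() repeatedly,
          appending each output back into the context (the scratchpad)."""
          scratch = []
          for _ in bits:
              scratch.append(step((bits, scratch)))
          return scratch[-1]                    # final token = overall parity

      print(decode_parity([1, 0, 1, 1]))  # 1
      ```

      This is the "unrolling" move: n sequential XORs become n next-token problems, each requiring only one step of work.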

      The RASP conjecture provides a natural way to understand why scratchpads (Nye et al., 2021; Wei et al., 2022) can be helpful: scratchpads can simplify the next-token prediction task, making it amenable to a short RASP-L program. One especially common type of simplification is when a scratchpad is used to “unroll” a loop, turning a next-token problem that requires n sequential steps into n next-token problems that are each only one step. The intuition here is that Transformers can only update their internal state in very restricted ways — given by the structure of attention — but they can update their external state (i.e. context) in much more powerful ways. This helps explain why parity does not have a RASP-L program, but addition with index hints does. Both tasks require some form of sequential iteration, but in the case of addition, the iteration’s state is external: it can be decoded from the input context itself.
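      A small sketch of that last contrast (illustrative code, not from the paper): for parity, the running XOR exists only inside whoever is doing the iteration, whereas for addition the carry at each digit position can be re-decoded from the addends in the context, so a stateless next-token predictor suffices.

      ```python
      # Addition with "external" state: the carry is NOT remembered between
      # calls -- it is reconstructed from the input context every time.

      def next_answer_digit(a_digits, b_digits, emitted_so_far):
          """Next digit of a + b, least-significant digit first."""
          i = len(emitted_so_far)
          carry = 0
          for j in range(i):  # carry is decodable from the addends alone
              carry = (a_digits[j] + b_digits[j] + carry) // 10
          return (a_digits[i] + b_digits[i] + carry) % 10

      # 47 + 85 = 132; digits emitted least-significant first: 2, 3, 1
      a, b = [7, 4, 0], [5, 8, 0]  # 47 and 85, zero-padded
      out = []
      for _ in range(3):
          out.append(next_answer_digit(a, b, out))
      print(out)  # [2, 3, 1]
      ```

      The index hints in the paper play a related role: they let attention align corresponding digit positions, so each output token can be computed from the context without hidden sequential state.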