  • However, pages 6 and 13 describe tricks for making loops:

    One way to bypass this limitation is to exploit the autoregressive inference procedure. Since the model is called iteratively at inference time, this effectively provides an “outer-loop” that can enable a certain kind of sequential computation, where the sequential state is encoded into the prior context. This is exactly what scratchpads enable.

    The RASP conjecture provides a natural way to understand why scratchpads (Nye et al., 2021; Wei et al., 2022) can be helpful: scratchpads can simplify the next-token prediction task, making it amenable to a short RASP-L program. One especially common type of simplification is when a scratchpad is used to “unroll” a loop, turning a next-token problem that requires n sequential steps into n next-token problems that are each only one step. The intuition here is that Transformers can only update their internal state in very restricted ways—given by the structure of attention—but they can update their external state (i.e. context) in much more powerful ways. This helps explain why parity does not have a RASP-L program, but addition with index hints does. Both tasks require some form of sequential iteration, but in the case of addition, the iteration’s state is external: it can be decoded from the input context itself.
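
To make the “unroll a loop” idea concrete, here is a minimal Python sketch of my own (not code from the paper, purely illustrative): computing the parity of n bits directly forces all n sequential XORs into a single answer token, whereas emitting the running parity as a scratchpad turns each next token into a one-step update that only needs the previous scratchpad token and the next input bit.

```python
# Illustrative only: how a scratchpad "unrolls" the parity loop.

def parity_direct(bits):
    """No scratchpad: the single answer token has to absorb all n
    sequential XOR steps at once."""
    acc = 0
    for b in bits:
        acc ^= b          # n sequential updates hidden inside one prediction
    return acc

def parity_with_scratchpad(bits):
    """Scratchpad: after each bit, the running parity is appended to the
    context, so every next-token step is just
    (last scratchpad token) XOR (next bit) -- a one-step update whose
    state can be read straight from the context."""
    context = list(bits)
    running = 0
    for b in bits:
        running ^= b
        context.append(running)   # sequential state is externalized here
    return context, context[-1]

if __name__ == "__main__":
    bits = [1, 0, 1, 1, 0, 1]
    scratch, answer = parity_with_scratchpad(bits)
    print("input bits:       ", bits)
    print("scratchpad tokens:", scratch[len(bits):])   # [1, 1, 0, 1, 1, 0]
    print("final answer:     ", answer)
    assert answer == parity_direct(bits)
```

The point of the sketch is only the bookkeeping: both versions do the same total work, but in the scratchpad version the loop state lives in the emitted tokens rather than inside a single prediction, which is the external-state property the quoted passage attributes to addition with index hints.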