• koi691337@alien.top · 1 year ago

    Then you could have the language model generate imagined user responses and optimize the reward signal on the imagined user responses

    Wouldn’t this just amount to the model overfitting to noise?
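
    For concreteness, here is a minimal toy sketch of the loop being discussed: an assistant policy is updated with REINFORCE against a reward computed on imagined user responses sampled from a stand-in user model. The prompts, replies, `imagined_user_response` and `reward_model` below are invented placeholders, not anyone's actual setup.

```python
# Toy sketch: optimize an assistant policy against a reward computed on
# *imagined* user responses produced by a (stand-in) learned user model.

import numpy as np

rng = np.random.default_rng(0)

PROMPTS = ["how do I sort a list?", "explain overfitting"]
REPLIES = ["terse answer", "detailed answer with example", "off-topic joke"]

# Tabular policy: one logit per (prompt, reply) pair.
logits = np.zeros((len(PROMPTS), len(REPLIES)))

def sample_reply(p_idx):
    """Sample an assistant reply from the softmax policy for this prompt."""
    probs = np.exp(logits[p_idx] - logits[p_idx].max())
    probs /= probs.sum()
    r_idx = rng.choice(len(REPLIES), p=probs)
    return r_idx, probs

def imagined_user_response(p_idx, r_idx):
    """Stand-in for a learned user model predicting how the user would react.
    A real system would sample this from a language model conditioned on the
    dialogue so far; this toy 'model' simply prefers detailed answers."""
    p_pleased = [0.2, 0.8, 0.05][r_idx]
    return "thanks, that helps" if rng.random() < p_pleased else "that wasn't helpful"

def reward_model(user_response):
    """Stand-in for a learned reward model over user reactions."""
    return 1.0 if "thanks" in user_response else 0.0

LR = 0.1
baseline = 0.0
for step in range(2000):
    p_idx = rng.integers(len(PROMPTS))
    r_idx, probs = sample_reply(p_idx)
    user_resp = imagined_user_response(p_idx, r_idx)   # imagined rollout
    reward = reward_model(user_resp)                   # reward on the imagined response
    baseline = 0.99 * baseline + 0.01 * reward         # simple variance reduction
    # REINFORCE: grad of log-softmax w.r.t. logits is (one_hot - probs).
    grad = -probs
    grad[r_idx] += 1.0
    logits[p_idx] += LR * (reward - baseline) * grad

print("learned reply preferences per prompt:")
for p_idx, prompt in enumerate(PROMPTS):
    best = REPLIES[int(np.argmax(logits[p_idx]))]
    print(f"  {prompt!r} -> {best!r}")
```

    Everything downstream of `imagined_user_response` depends on that model being right: if it is systematically wrong, the policy converges to whatever the wrong model rewards, which is exactly the overfitting concern raised here.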

    • til_life_do_us_part@alien.top · 1 year ago

      It’s a risk if your model can’t accurately predict user responses, but I don’t see how it’s a necessary characteristic of the approach. If so, the same issue would apply to model-based RL in general, no? Unless you are suggesting there is something special about language modelling or user responses that makes them fundamentally hard to learn a model of.
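
      To make that risk concrete, here is a tiny made-up example of how a miscalibrated user model changes which reply gets reinforced; the probabilities are invented purely for illustration.

```python
# If the learned user model is miscalibrated, the reply that looks best on
# imagined responses need not be the reply real users actually prefer.

import numpy as np

REPLIES = ["terse answer", "detailed answer with example", "off-topic joke"]

# Probability of a positive user reaction to each reply (made-up numbers):
true_user    = np.array([0.60, 0.80, 0.05])  # how real users react
learned_user = np.array([0.30, 0.40, 0.90])  # a badly fit user model

best_imagined = REPLIES[int(np.argmax(learned_user))]
best_real     = REPLIES[int(np.argmax(true_user))]

print(f"reply favoured by imagined responses: {best_imagined!r}")
print(f"reply favoured by real users:         {best_real!r}")
# Optimizing hard against the learned model pushes the policy toward
# 'off-topic joke' here, i.e. it exploits the model's error rather than
# improving for users. Whether this happens in practice depends entirely
# on how accurate the user model is, which is the point above.
```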