So RWKV 7B v5 is 60% trained now. I saw that the multilingual performance is better than Mistral's now, and the English capabilities are close to Mistral, except for HellaSwag and ARC, where it's a little behind. All the benchmarks are on the RWKV Discord, and you can google the pros/cons of RWKV, though most of those cover v4.

Thoughts?

  • vatsadev@alien.top (OP) · 1 year ago

    Hmm, will have to check this with the people on the RWKV Discord server.

    V5 is stable at context usage, and V6 is trying to get better at using the context, so we might see improvement there.

    • MichalO19@alien.top · 1 year ago

      If I understood the original explanation for RWKV on GitHub correctly, BlinkDL agrees that softmax attention is very capable in theory, but he thinks Transformers are not using it to its full potential, so theoretically less capable architectures can beat them.

      This might be true, but I kind of doubt it. I played a bit with the 3B RWKV with a prompt like

      User: What is the word directly after "bread" in the following string "[like 20 random words]" 
      Assistant: The word directly after "bread" is "
      

      (Note the ordering preferred for RWKV, with the question before the data, though I tested the other way around too.) Unless the query word appears very early in the string, it gives me a random word. Even 1.3B transformer models seem to answer this correctly much more often (though not always).
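
      For anyone who wants to reproduce this, here's a rough Python sketch of the probe. The word list and helper names are just illustrative, and the question-before-data ordering matches the prompt above; RWKV's fixed-size recurrent state is presumably what makes this kind of verbatim lookup harder than it is for softmax attention, which can attend back to every stored token directly.

      import random

      # Small filler vocabulary; purely illustrative.
      WORDS = ("apple bridge candle donkey engine forest guitar hammer island "
               "jacket kettle ladder mirror needle orange pepper quartz ribbon "
               "saddle turnip").split()

      def make_probe(n_words=20, target="bread", seed=None):
          rng = random.Random(seed)
          words = rng.sample(WORDS, k=n_words)
          pos = rng.randrange(n_words - 1)   # leave room for a word after the target
          words[pos] = target
          answer = words[pos + 1]
          text = " ".join(words)
          # Question-before-data ordering, as in the prompt above.
          prompt = (f'User: What is the word directly after "{target}" in the '
                    f'following string "{text}"\n'
                    f'Assistant: The word directly after "{target}" is "')
          return prompt, answer

      prompt, answer = make_probe(seed=0)
      print(prompt)
      print("expected:", answer)
      # To score a model, generate a short completion from `prompt` and check
      # whether it starts with `answer` (e.g. load a checkpoint with Hugging Face
      # transformers' AutoModelForCausalLM and call .generate()).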