I did some ratings on Chatbot Arena and I noticed one thing: when an AI honestly said “I don’t know that” or “I don’t understand that”, it was always better received by me and felt kinda smarter.

Does some dataset or LoRA train on that? Or is “knowing about not knowing” too hard to achieve?

  • __SlimeQ__@alien.topB · 10 months ago

    my personal LoRA does this just because it was trained on actual human conversations. it’s super unnatural for people to try answering just any off-the-wall question; most people will just go “lmao” or “idk, wtf”, and if you methodically strip that from the data (like most instruct datasets do) it makes the bots act weird as hell

    • itsmeabdullah@alien.topB · 10 months ago

      Do you have experience with training LoRAs off private conversational data? If so, can I DM you? I have a huge favour to ask regarding training, if you don’t mind.

    • False_Grit@alien.topB · 10 months ago

      Typical ChatGPT user:

      “I want my bots to be more human!”

      Also them:

      “Hi, this is my first time meeting you. What is the meaning of life and why do I sob uncontrollably when I touch pickles?”
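The data-cleaning step SlimeQ mentions — instruct pipelines methodically stripping “idk”-style replies before training — can be sketched roughly like this. This is a hypothetical filter, not any particular dataset’s actual pipeline; the phrase list and function names are made up for illustration:

```python
import re

# Hypothetical non-answer detector: replies that open with a low-content
# hedge like "idk" or "lmao". Real pipelines use longer lists or classifiers.
NON_ANSWER = re.compile(r"^\s*(idk|lmao|no idea|dunno|wtf)\b", re.IGNORECASE)

def strip_non_answers(pairs):
    """Keep only (prompt, reply) pairs whose reply looks like a real answer."""
    return [(p, r) for p, r in pairs if not NON_ANSWER.match(r)]

pairs = [
    ("what's the meaning of life?", "idk, wtf"),
    ("how do I sort a list in python?", "use sorted(my_list)"),
]
# The first pair gets dropped; only the "real" answer survives.
print(strip_non_answers(pairs))
```

A model fine-tuned only on the surviving pairs never sees an example of declining to answer, which is one plausible reason instruct-tuned bots confidently answer everything.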

  • Zealousideal_Nail288@alien.topB · 10 months ago

    instead of “don’t know”, we should teach them to ask further questions and work from that:

    me: my shitbox broke down
    ai: what do you mean by shitbox?
    me: my old car, which just decided to stop working
    ai: what should i do about it?
    me: don’t know, can you cheer me up?
    ai: of course …

    that is what i would call intelligence. “don’t know that” is more like what a chatbot from the 2000s or even the 80s would say. that was what i was planning on writing, but ELIZA is actually from 1966
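The ask-before-answering behaviour in the dialogue above is often approximated today with a system prompt rather than training. A minimal sketch, assuming the common chat-message format (the prompt wording is invented for illustration):

```python
# Hypothetical system prompt nudging a chat model to ask clarifying
# questions instead of guessing or flatly replying "I don't know".
SYSTEM_PROMPT = (
    "If the user's message is ambiguous or uses slang you are unsure about, "
    "ask one short clarifying question before answering. "
    "Only answer once you understand what the user means."
)

# Standard role/content message list accepted by most chat-completion APIs.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "my shitbox broke down"},
]
# A model following the prompt would plausibly reply with something like
# "what do you mean by shitbox?" rather than generic breakdown advice.
```

Whether the model actually complies depends on the model and prompt; training on real conversations (as mentioned upthread) tends to produce this behaviour more reliably than prompting alone.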