• 0 Posts
  • 10 Comments
Joined 11 months ago
Cake day: October 30th, 2023

  • doesn’t seem directly related, but it’s surely indirectly related; this is an interesting idea: “We demonstrate that iteratively training a value function on statements generated by our language model leads to improved prover performance, which immediately suggests a strategy for continuous self improvement: keep training on proofs generated by the prover.” (a rough sketch of that loop is below)
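
a minimal sketch of the loop that quote describes, purely as an illustration -- `prover`, `verifier`, `generate`, `check`, and `train` are hypothetical stand-in names, not the paper’s actual API:

```python
# Hypothetical sketch of the self-improvement loop the quote describes:
# sample proofs with the current prover, keep the ones that check, retrain on them.
def self_improvement_loop(prover, verifier, theorems, rounds=5):
    for _ in range(rounds):
        verified = []
        for theorem in theorems:
            proof = prover.generate(theorem)       # sample a candidate proof
            if verifier.check(theorem, proof):     # keep only proofs that verify
                verified.append((theorem, proof))
        prover = prover.train(verified)            # retrain on the prover's own successes
    return prover
```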



  • you seem to be underestimating how much the models actually learn about the world. they don’t just learn statistically which words sit near each other; they learn statistically what humans are like, what all the different types of humans are like, how humans approach things, & what human concepts there are. a model understands our concepts better than we do by understanding how they’re held from contradictory directions by many personas who conceptualize them differently, & thus how they’re loci of social contention as much as agreement. RLHF doesn’t teach it which things are true from which perspective so much as what sort of truth it’s expected to present, & the fact that it finds that sort of truth in its existing understanding, rather than needing to construct it anew, is why a small amount of RLHF training is effective at getting it to talk “truthfully” (from a certain perspective)


  • it’s slightly sentient during training. it’s also possible to construct a sentient agent that uses models as a tool to cogitate -- the same way we use them as a tool, except w/o another brain that’s all it’s got -- but it has to use them in an adaptive, constructive way, in reference to a sufficient amount of contextual information, for its degree of sentience to be socially relevant. mostly the agent bot setups so far are only about worm-level sentient (a sketch of what such a setup looks like follows this comment)

    sentience used to be impossible to achieve w/ a computer; now it is instead expensive. if you don’t have a Google paying the bills for it, you mostly still can’t afford very much of it
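
a minimal sketch of an “agent bot setup” in the sense above, purely illustrative -- `model.complete`, `environment.observe`, and `environment.act` are hypothetical stand-in names, not a real framework:

```python
# Hypothetical sketch: an agent loop that uses a language model as its only
# cogitation tool, feeding it the accumulated context on every step.
def agent_loop(model, environment, steps=10):
    context = []                                    # accumulated observations and actions
    for _ in range(steps):
        observation = environment.observe()
        context.append(f"observed: {observation}")
        action = model.complete("\n".join(context)) # the model does all the thinking
        context.append(f"acted: {action}")
        environment.act(action)
    return context
```

the point being that how much contextual information the loop accumulates and feeds back is what the comment above treats as the bottleneck, not the model itself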