Q* is just a reinforcement learning technique.
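For context, "Q*" in the RL literature usually denotes the optimal action-value function that Q-learning converges to. Here's a minimal tabular Q-learning sketch on a made-up toy corridor, purely illustrative of the classic technique and not anything OpenAI has described:

```python
# Tabular Q-learning on a hypothetical 1-D corridor (toy example, not OpenAI's method).
# The table Q(s, a) is updated toward the Bellman target; Q* is what it converges to.
import random

N = 5                      # states 0..4; reaching state 4 ends the episode with reward 1
ACTIONS = [-1, +1]         # move left or right
alpha, gamma, eps = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    """Return (next_state, reward, done) for the toy corridor."""
    s2 = max(0, min(N - 1, s + a))
    done = (s2 == N - 1)
    return s2, (1.0 if done else 0.0), done

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best action in the next state
        target = r if done else r + gamma * max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# greedy policy learned per state
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N)})
```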
Perhaps they scaled it up and combined it with LLMs. Given their recently published paper, they probably figured out a way to get GPT to learn its own reward function somehow. Perhaps some Chicken Little board members believe this would be the philosophical trigger for a machine intelligence deciding on its own alignment.