Paper: https://arxiv.org/abs/2311.02462
Abstract:
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy. It is our hope that this framework will be useful in an analogous way to the levels of autonomous driving, by providing a common language to compare models, assess risks, and measure progress along the path to AGI. To develop our framework, we analyze existing definitions of AGI, and distill six principles that a useful ontology for AGI should satisfy. These principles include focusing on capabilities rather than mechanisms; separately evaluating generality and performance; and defining stages along the path toward AGI, rather than focusing on the endpoint. With these principles in mind, we propose ‘Levels of AGI’ based on depth (performance) and breadth (generality) of capabilities, and reflect on how current systems fit into this ontology. We discuss the challenging requirements for future benchmarks that quantify the behavior and capabilities of AGI models against these levels. Finally, we discuss how these levels of AGI interact with deployment considerations such as autonomy and risk, and emphasize the importance of carefully selecting Human-AI Interaction paradigms for responsible and safe deployment of highly capable AI systems.
“Not designed or intended to” is not the same as “fundamentally unable to”. There are quite simplistic architectures that are perfectly capable of updating their weights, and that does not make them AGI or any more intelligent. The discussion is about general capability at intellectual tasks, not about the training mechanism.
What I meant was more that, to learn something new, the model would have to update its own weights (I lay out my reasoning for this in another reply in this thread).
When I said “fundamentally unable to”, I meant that current LLM architectures do not have the capability to update their own weights (although I probably should’ve worded that a bit differently).
They don’t have it because it wasn’t built in, since doing so is risky business (see Microsoft’s Tay chatbot), not because it’s currently impossible. There’s nothing preventing you from running backprop weight updates on a deployed model based on user interactions, e.g. with reinforcement from user sentiment; a rough sketch is below.
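A minimal sketch of what that could look like, assuming a Hugging Face causal LM; the `score_sentiment` function is a hypothetical placeholder for any sentiment classifier, and this is reward-weighted likelihood rather than full RLHF:

```python
# Online weight updates from live user feedback: each interaction triggers
# one backprop step on the model's own response, scaled by user sentiment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

def score_sentiment(user_reply: str) -> float:
    """Hypothetical reward signal: positive if the user seems happy."""
    return 1.0 if "thanks" in user_reply.lower() else -0.5

def online_update(prompt: str, response: str, user_reply: str) -> None:
    """Reinforce (or penalize) the model's own response in place,
    weighted by the sentiment of the user's follow-up message."""
    reward = score_sentiment(user_reply)
    inputs = tokenizer(prompt + response, return_tensors="pt")
    labels = inputs["input_ids"].clone()

    outputs = model(**inputs, labels=labels)
    # A negative reward flips the sign, pushing the weights *away* from
    # the response: a crude REINFORCE-style update.
    loss = reward * outputs.loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example: a single live interaction mutates the deployed weights.
online_update("User: What's 2+2?\nBot: ", "4", "thanks!")
```

Whether this produces useful learning is another question (Tay is the cautionary tale), but mechanically it’s just ordinary backprop wired to the serving loop.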