Not sure that's possible, but even if it is, it would be very inefficient: each model would have to learn everything from scratch, like grammar, word order, etc. You could make a central model that is pre-trained on a large corpus and fine-tuned for a specific task, but then it's just a standard GPT-like model. Aggregating several of those, mixture-of-experts or ensemble style, could potentially do what you're describing.
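To make the ensemble idea concrete, here's a minimal sketch of aggregating independently fine-tuned models at inference time by averaging their next-token distributions. The function name and the toy logits are made up for illustration; real models would produce logits over a shared vocabulary:

```python
import numpy as np

def ensemble_next_token(logits_list):
    """Average the predictive distributions of several models (simple ensemble).

    Each entry in logits_list is a 1-D array of logits over the same vocabulary.
    """
    # Softmax each model's logits (subtract max for numerical stability)
    probs = [np.exp(l - l.max()) / np.exp(l - l.max()).sum() for l in logits_list]
    # Uniform average of the distributions; a gated mixture-of-experts
    # would instead weight each model per input
    avg = np.mean(probs, axis=0)
    return int(np.argmax(avg)), avg

# Toy logits over a 4-token vocabulary from two hypothetical fine-tuned models
model_a = np.array([2.0, 0.5, 0.1, 0.0])
model_b = np.array([0.1, 2.5, 0.2, 0.0])
token, dist = ensemble_next_token([model_a, model_b])
```

Note this aggregates *outputs*, not weights: the models stay separate, which sidesteps the problem that their internal representations are incompatible.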
Combining multiple independently trained models into one huge model isn't possible with the current standard training regime: each model learns something different for its task, and their internals are shaped by the inherently stochastic nature of LLM training (which is actually desirable when aggregating information). Unless you're purely looking to "retrieve information", what you're describing won't work.
And how is this related to this sub?