How many users do you have? If you’ve been logging your inputs/outputs to GPT-4, you can probably use that data to fine-tune your own model that will perform similarly.
The biggest issue you’re going to have is probably hardware.
LLMs are not cheap to run, and if you end up needing several of them to replace OpenAI, your bill just to keep the models online is going to be significant.
It’s also going to be tough to maintain all the infra you’ll need without a full-time devops/MLOps person.
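If you do go the distillation route, the first step is just reshaping your logged traffic into a training file. Here's a minimal sketch, assuming you have (prompt, completion) pairs from your logs; the chat-style JSONL layout shown is the one most open-model fine-tuning tools accept, but check the exact schema your trainer expects. The `logged` data here is made up for illustration.

```python
import json

def to_finetune_jsonl(pairs, out_path):
    """Convert logged (prompt, completion) pairs into chat-style JSONL,
    one training example per line."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt, completion in pairs:
            record = {
                "messages": [
                    {"role": "user", "content": prompt},
                    {"role": "assistant", "content": completion},
                ]
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hypothetical logged traffic -- substitute your real GPT-4 logs here.
logged = [
    ("Summarize our refund policy.", "Refunds are issued within 30 days of purchase."),
    ("Translate 'hello' to French.", "Bonjour."),
]
to_finetune_jsonl(logged, "train.jsonl")
```

From there you'd point whatever fine-tuning stack you pick at the JSONL file; the dataset prep is the same either way.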