
  • I agree that ONNX would be the right solution if you need to serve 100M inference requests. However, my code is not for that case; most likely it will serve only about 100K requests and then be either thrown away or completely re-engineered for the next iteration of requirements. It's also not just about the binary model file: data needs to be pulled from an internal API, pre-processed, run through inference, and finally post-processed.

    I know how to convert it to FastAPI, but I was curious whether there is any solution that lets me parameterize and serve an inference cell's code with low effort.
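
    For context, the pipeline I'm describing is roughly this shape (a minimal sketch; `fetch_records`, the model call, and the thresholds are all hypothetical placeholders, not my actual code):

    ```python
    # Sketch of the fetch -> preprocess -> infer -> postprocess pipeline.
    # Every function here is a stand-in for illustration only.

    def fetch_records(batch_id: int) -> list[str]:
        # Placeholder for the call to the internal API.
        return ["0.2", "0.4"]

    def preprocess(raw: list[str]) -> list[float]:
        # e.g. parse and normalize the fields returned by the API
        return [float(x) for x in raw]

    def infer(features: list[float]) -> float:
        # Stand-in for the real model inference call.
        return sum(features) / len(features)

    def postprocess(score: float) -> dict:
        # Shape the raw score into the response consumers expect.
        return {"score": round(score, 3), "label": "high" if score > 0.5 else "low"}

    def run_pipeline(batch_id: int) -> dict:
        return postprocess(infer(preprocess(fetch_records(batch_id))))
    ```

    Wrapping `run_pipeline` in a FastAPI endpoint is mechanical; the question is whether anything can expose a notebook cell like this directly without that boilerplate.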