I am currently researching ways to export models that I trained with PyTorch on a GPU to a microcontroller for inference. Think CM0 or a simple RISC-V. The ideal workflow would be to export C source code with as few dependencies as possible, so that it is completely platform agnostic.
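For context, the first step is the same in nearly all of these workflows: getting the trained model out of PyTorch into an exchange format. A minimal ONNX export sketch (the tiny model and input shape are placeholders, and opset 17 is just an assumption; pick whatever your downstream tool supports):

```python
import torch
import torch.nn as nn

# Placeholder network; substitute your trained model.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()

# Dummy input fixing the shape the exporter traces with.
dummy = torch.randn(1, 3, 96, 96)

torch.onnx.export(model, dummy, "model.onnx", opset_version=17,
                  input_names=["input"], output_names=["output"])
```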
What I noticed in general is that most edge inference frameworks are based on TensorFlow Lite. Alternatively, there are some closed workflows, like Edge Impulse, but I would prefer locally hosted OSS. Also, there seem to be many abandoned projects. What I found so far:
TensorFlow Lite based
- TensorFlow Lite
- TinyEngine from MCUNet. Looks great, targeting ARM CM4.
- CMSIS-NN. ARM-centric, with examples. They also have an example of a PyTorch-to-TFLite conversion via ONNX (see the sketch after this list).
- TinyMaix. Very minimalistic, can also be used on RISC-V.
- NNoM
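Since the PyTorch-to-TFLite path via ONNX keeps coming up, here is roughly what it looks like as far as I understand it. A sketch assuming the onnx and onnx-tf packages are installed; onnx-tf has seen little maintenance lately, so the API may have drifted:

```python
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

# Convert the ONNX model exported from PyTorch into a TF SavedModel.
onnx_model = onnx.load("model.onnx")
prepare(onnx_model).export_graph("saved_model")

# Convert the SavedModel into the flatbuffer that TFLite Micro consumes.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```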
PyTorch based
- PyTorch Edge / ExecuTorch. Sounds like this could be a response to TFLite, but it seems to target intermediate systems; the runtime is 50 kB… (lowering sketch after this list)
- microTVM. Targeting CM4, but claims to be platform agnostic.
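For reference, the ExecuTorch lowering path looks roughly like this as far as I can tell; the API is still moving, so treat this as a sketch against a recent release rather than a stable recipe:

```python
import torch
import torch.nn as nn
from executorch.exir import to_edge

# Placeholder network; substitute your trained model.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
example_inputs = (torch.randn(1, 3, 96, 96),)

# Capture the graph, lower it to the Edge dialect, then serialize
# the program for the on-device ExecuTorch runtime.
exported = torch.export.export(model, example_inputs)
et_program = to_edge(exported).to_executorch()

with open("model.pte", "wb") as f:
    f.write(et_program.buffer)
```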
ONNX
- deepC. Open-source version of deepSea. Very little activity; looks abandoned.
- onnx2c - ONNX-to-C source code converter. Looks interesting, but also not very active.
- cONNXr - framework with a C99 inference engine. Also interesting, and also not very active. (A sanity check for the exported .onnx before feeding it to these tools is sketched below.)
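One thing worth doing before blaming any of these C generators: check that the exported .onnx actually reproduces the PyTorch outputs. A minimal sketch with onnxruntime, reusing the placeholder model from the export step above:

```python
import numpy as np
import onnxruntime as ort
import torch
import torch.nn as nn

# Same placeholder model and export as above.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
x = torch.randn(1, 3, 96, 96)
torch.onnx.export(model, x, "model.onnx", input_names=["input"])

# Run the exported graph and compare against PyTorch within a tolerance.
sess = ort.InferenceSession("model.onnx")
onnx_out = sess.run(None, {"input": x.numpy()})[0]
torch_out = model(x).detach().numpy()
np.testing.assert_allclose(torch_out, onnx_out, rtol=1e-3, atol=1e-5)
print("ONNX export matches PyTorch")
```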
Are there any recommendations out of those for my use case? Or anything I have missed? It feels like there is no obvious choice for what I am trying to do.
Most solutions that seem to hit the mark look rather abandoned. Is that because I should try a different approach, or is the field of ultra-tiny-ML OSS in general just not very active?
Indeed, I am currently looking at the ONNX-based tools. Architecture-aware training is of course also quite important; I am not sure how to cover that yet.
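The closest concrete handle I have found on that so far within PyTorch itself is quantization-aware training, which at least makes the network aware of the int8 constraints it will run under on the MCU. A minimal eager-mode sketch; the qconfig backend and the tiny model are placeholder choices:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()    # marks the float -> int8 boundary
        self.conv = nn.Conv2d(1, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = TinyNet().train()
model.qconfig = torch.ao.quantization.get_default_qat_qconfig("qnnpack")
torch.ao.quantization.prepare_qat(model, inplace=True)

# ... normal training loop goes here, with fake-quant ops simulating int8 ...

model.eval()
model_int8 = torch.ao.quantization.convert(model)  # real int8 weights from here on
```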
Well, first of all, this is meant as a learning exercise for me, so I would like to be in control of every step. And then it is probably general mistrust of SaaS.