I am currently researching ways to export models that I trained with PyTorch on a GPU to a microcontroller for inference. Think Cortex-M0 or a simple RISC-V core. The ideal workflow would export C source code with as few dependencies as possible, so that it is completely platform agnostic.

What I noticed in general is that most edge inference frameworks are based on TensorFlow Lite. Alternatively, there are some closed workflows, like Edge Impulse, but I would prefer locally hosted OSS. There also seem to be many abandoned projects. What I have found so far:

TensorFlow Lite based

PyTorch based

  • PyTorch Edge / ExecuTorch. Sounds like this could be a response to TFLite, but it seems to target intermediate-class systems. The runtime is ~50 kB… A minimal export sketch follows this list.
  • microTVM. Targets the Cortex-M4, but claims to be platform agnostic.
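
For reference, here is the documented ExecuTorch export flow, roughly sketched with a hypothetical Net model and a placeholder input shape; the exir API is still evolving, so treat this as an illustration rather than a stable recipe:

```python
import torch
from executorch.exir import to_edge

model = Net().eval()                          # hypothetical trained model
example_args = (torch.randn(1, 1, 28, 28),)   # placeholder input shape

# Capture the model, lower it to the Edge dialect, and serialize it to
# the .pte format that the on-device ExecuTorch runtime loads.
aten_program = torch.export.export(model, example_args)
executorch_program = to_edge(aten_program).to_executorch()

with open("model.pte", "wb") as f:
    f.write(executorch_program.buffer)
```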

ONNX

  • DeepC. Open-source version of DeepSea. Very little activity; looks abandoned.
  • onnx2c - ONNX-to-C source code converter. Looks interesting, but also not very active. (A PyTorch-to-ONNX export sketch follows this list.)
  • cONNXr - framework with a C99 inference engine. Also interesting, and also not very active.
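
All three tools consume an ONNX file, so the first step is the same in each case. A rough sketch of that export, with a hypothetical MyModel and a placeholder input shape; the trailing onnx2c invocation follows its README, but check the current options:

```python
import torch

model = MyModel()                         # hypothetical trained model
model.eval()
dummy_input = torch.randn(1, 1, 28, 28)   # placeholder input shape

# Trace the model once and write a self-contained ONNX file.
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=12)

# onnx2c then emits a single dependency-free C source file, roughly:
#   onnx2c model.onnx > model.c
```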

Are there any recommendations out of those for my use case? Or anything I have missed? It feels like there is no obvious choice for what I am trying to do.

Most solutions that seem to hit the mark look rather abandoned. Is that because I should try a different approach, or is the field of ultra-tiny-ML OSS in general just not very active?

  • DigThatData@alien.top · 1 year ago

    Just wanted to say that this is a domain I don't have a lot of experience with, and I would be very interested if you keep us updated with your findings as you explore the different options available.

  • Complex-Indication@alien.top · 1 year ago

    I think you are spot on in your assessment: TFLite Micro was the first production-ready framework for deploying NNs to microcontrollers, and it is still the most popular and streamlined. Realistically speaking, you should evaluate which path gets you to your goal faster: converting PyTorch to ONNX and then taking the TFLite Micro route, or using a less known and less maintained project to run the ONNX model directly.
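
    For what it is worth, the first route looks roughly like this, assuming an already-exported model.onnx plus the onnx-tf and tensorflow packages; both APIs are documented but version-sensitive, so treat this as a sketch rather than a tested recipe:

    ```python
    import onnx
    import tensorflow as tf
    from onnx_tf.backend import prepare

    # ONNX -> TensorFlow SavedModel
    prepare(onnx.load("model.onnx")).export_graph("saved_model")

    # SavedModel -> TFLite flatbuffer; int8 post-training quantization
    # would be configured on the converter here for MCU targets.
    converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
    with open("model.tflite", "wb") as f:
        f.write(converter.convert())

    # TFLite Micro then consumes the flatbuffer embedded as a C array, e.g.:
    #   xxd -i model.tflite > model_data.h
    ```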

    One question: since you mentioned Edge Impulse, why would you want to go the self-hosted OSS route?

    • cpldcpu@alien.top (OP) · 1 year ago

      Realistically speaking, you should evaluate which path gets you to your goal faster: converting PyTorch to ONNX and then taking the TFLite Micro route, or using a less known and less maintained project to run the ONNX model directly.

      Indeed, I am currently looking at the ONNX-based tools. Architecture-aware training is of course also quite important; I am not sure how to cover that yet.
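
      One candidate, assuming quantization-aware training is what matters here (int8 quantization is essentially mandatory on these targets): PyTorch's eager-mode QAT, roughly sketched below. The torch.ao.quantization names are the documented ones but have moved between releases, and real models also need module fusion first:

      ```python
      import torch
      from torch.ao.quantization import get_default_qat_qconfig, prepare_qat, convert

      model.train()
      model.qconfig = get_default_qat_qconfig("qnnpack")  # ARM-oriented backend
      qat_model = prepare_qat(model)     # insert fake-quantization observers

      # ... run the usual training loop on qat_model ...

      qat_model.eval()
      int8_model = convert(qat_model)    # swap modules for int8 equivalents
      ```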

      Since you mentioned Edge Impulse, why would you want to go the self-hosted OSS route?

      Well, first of all, this is meant as a learning exercise for me, so I would like to be in control of every step. Beyond that, it is probably general mistrust of SaaS.