Yup, it will definitely help speed up inference on models you can get working.
My personal recommendation is to start with something like PyTorch Mobile or TensorFlow Lite (whichever you prefer). The main benefit is that you can take a model written in PyTorch and compile it down to a representation that will use the NN API, along the lines of the sketch below.
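Roughly, the PyTorch side looks like this. It's a minimal sketch based on PyTorch's prototype NNAPI converter (`torch.backends._nnapi`), which is still experimental and may change between releases; I'm using MobileNetV2 as a stand-in model because the converter only supports a limited set of ops:

```python
# Sketch: compile a traced PyTorch model into an NNAPI-backed module.
# Uses the prototype torch.backends._nnapi converter; API may change.
import torch
import torch.backends._nnapi.prepare
import torchvision

model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()

# NNAPI expects NHWC-ordered input; mark the example tensor accordingly.
example = torch.zeros(1, 3, 224, 224)
example = example.contiguous(memory_format=torch.channels_last)
example.nnapi_nhwc = True

with torch.no_grad():
    traced = torch.jit.trace(model, example)

# Compile the traced graph into a module whose forward() runs on NNAPI.
nnapi_model = torch.backends._nnapi.prepare.convert_model_to_nnapi(traced, example)
nnapi_model._save_for_lite_interpreter("mobilenetv2_nnapi.ptl")
```

The resulting `.ptl` file is what you'd ship in the app and load with PyTorch Mobile's lite interpreter.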
You can pretty quickly use the examples in this repo to try running a language model like BERT. They'll also walk you through converting a model and running it on your phone.
https://github.com/pytorch/android-demo-app
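For reference, the conversion step in those demos boils down to the usual trace / optimize / save flow. This is a rough sketch of that flow, not the repo's exact script; the DistilBERT checkpoint and the 128-token shape are illustrative choices on my part:

```python
# Sketch of the standard PyTorch Mobile conversion flow:
# trace the model, optimize it for mobile, save for the lite interpreter.
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from transformers import DistilBertForQuestionAnswering

model = DistilBertForQuestionAnswering.from_pretrained(
    "distilbert-base-uncased-distilled-squad",
    torchscript=True,  # return plain tuples so the model can be traced
)
model.eval()

dummy_ids = torch.randint(0, 30522, (1, 128))  # fake token IDs (BERT-style vocab)
traced = torch.jit.trace(model, dummy_ids)

optimized = optimize_for_mobile(traced)
optimized._save_for_lite_interpreter("qa_model.ptl")  # loaded by the Android app
```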
If you’re going after maximum performance on a particular model, then it might make more sense to learn the NN API directly and build against it yourself. Personally, I'd probably work with the open source community to add an NN API backend to llama.cpp.