Questions regarding TFLite Micro support and NPU capabilities

Hi everyone,
I have been using the AMB82-mini for about a week now, and I love its capabilities.

My question is: I found that there is no "official" TensorFlow Lite Micro support for the AmebaPro2 architecture, but I have been using the TFLite Micro Arduino library built for the AmebaD architecture, and it works fine. I am just wondering if there is any performance compromise associated with using a library that was not built for this board.
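For context, this is roughly how I am driving the model from the AmebaD TFLite Micro Arduino library. It is only a minimal sketch: `g_model_data` is a placeholder name for my model converted to a C array, and the exact headers and interpreter constructor arguments can differ slightly between TFLM versions.

```cpp
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "model_data.h"  // g_model_data: .tflite model exported as a C array (placeholder name)

namespace {
constexpr int kTensorArenaSize = 100 * 1024;      // sized by trial and error for my model
alignas(16) uint8_t tensor_arena[kTensorArenaSize];

const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
}  // namespace

void setup() {
  Serial.begin(115200);

  model = tflite::GetModel(g_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    Serial.println("Model schema version mismatch");
    return;
  }

  // Register only the ops the model actually uses.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddMaxPool2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  if (interpreter->AllocateTensors() != kTfLiteOk) {
    Serial.println("AllocateTensors() failed");
    return;
  }
  input = interpreter->input(0);
  output = interpreter->output(0);
}

void loop() {
  // Fill input->data.int8 (or .f for a float model) with sensor data, then run inference.
  if (interpreter->Invoke() != kTfLiteOk) {
    Serial.println("Invoke() failed");
    return;
  }
  // Read results from output->data.
  delay(1000);
}
```

This runs on the Cortex-M CPU as far as I can tell, which is part of why I am asking about performance and the NPU below.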

Are there any other ways to do custom neural network inference on the AMB82 other than TFLite Micro?

And another question: I tried to find details on the onboard NPU of the AMB82-mini but couldn't find much information. Does the NPU support floating-point operations (FP32/FP16), or is it only for quantized (INT8/INT16) models? And how do I make sure the board actually uses the NPU for inference, so the models respond faster?

Thanks,
Ashish Bangwal

Quick Update,

I found information on the NPU (Intelligent Engine) in the datasheet for Amebapro-II.

The specifications are as follows:

Intelligent Engine

  • 0.384 TOPS
  • 384 MAC
  • Engine Precision: INT8/INT16

So it only supports quantized INT8/INT16 precision :+1:
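Given that, I plan to make sure my models are fully integer-quantized before deploying them. A quick on-device sanity check I am thinking of adding (a minimal sketch, assuming the same `interpreter` setup as in my earlier snippet) would be something like:

```cpp
// After AllocateTensors(): confirm the model's I/O tensors are INT8,
// since the Intelligent Engine's precision is INT8/INT16 only.
TfLiteTensor* in  = interpreter->input(0);
TfLiteTensor* out = interpreter->output(0);

if (in->type != kTfLiteInt8 || out->type != kTfLiteInt8) {
  Serial.println("Model is not fully INT8-quantized (float ops would fall back to the CPU)");
} else {
  // Quantization parameters used to convert between float and int8 values.
  Serial.print("input scale: ");      Serial.println(in->params.scale, 6);
  Serial.print("input zero_point: "); Serial.println(in->params.zero_point);
}
```

That only tells me the model is quantized, though, not whether the computation is actually offloaded to the NPU, which is what I am still trying to figure out below.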


Greetings,

No responses :slightly_frowning_face: :point_up_2:

Can anyone help me find an SDK to offload computations to the NPU, or an inference library supported by AmebaPro2 that can utilize the NPU?

I found that Rockchip has an SDK for neural network inference on its NPU.

Do we have something similar for the Realtek AmebaPro2 architecture?

Thanks