2019-02-20 02:39 PM
Hello,
Currently X-Cube-AI 3.3.0 supports only floating-point models (float32). It provides its own STM32-optimized libraries (mainly targeting Arm Cortex-M cores with a single-precision FPU). The ARM CMSIS-NN package covers only fixed-point support (q7, q15) and uses the Cortex-M DSP/SIMD extensions. A future X-Cube-AI release will add fixed-point support based on the low-level ARM CMSIS-NN functions.
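For reference, a single float32 inference with the generated code looks roughly like the sketch below. This is a minimal sketch based on the usual generated network.h / network_data.h API; the exact macro and function names depend on the X-Cube-AI version and on the network name chosen in CubeMX.

#include "network.h"        /* generated for a network named "network" */
#include "network_data.h"   /* generated weights and data descriptors */

/* Activation (scratch) memory sized by the code generator */
AI_ALIGNED(4) static ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];

/* float32 I/O buffers matching the model's input/output shapes */
AI_ALIGNED(4) static ai_float in_data[AI_NETWORK_IN_1_SIZE];
AI_ALIGNED(4) static ai_float out_data[AI_NETWORK_OUT_1_SIZE];

int run_inference(void)
{
    ai_handle network = AI_HANDLE_NULL;

    /* Create the network instance */
    ai_error err = ai_network_create(&network, AI_NETWORK_DATA_CONFIG);
    if (err.type != AI_ERROR_NONE)
        return -1;

    /* Bind the weights and the activations buffer */
    const ai_network_params params = AI_NETWORK_PARAMS_INIT(
        AI_NETWORK_DATA_WEIGHTS(ai_network_data_weights_get()),
        AI_NETWORK_DATA_ACTIVATIONS(activations));
    if (!ai_network_init(network, &params))
        return -1;

    /* Wrap the I/O buffers in ai_buffer descriptors and run one inference */
    ai_buffer ai_input[AI_NETWORK_IN_NUM] = AI_NETWORK_IN;
    ai_buffer ai_output[AI_NETWORK_OUT_NUM] = AI_NETWORK_OUT;
    ai_input[0].data  = AI_HANDLE_PTR(in_data);
    ai_output[0].data = AI_HANDLE_PTR(out_data);
    ai_network_run(network, ai_input, ai_output);

    ai_network_destroy(network);
    return 0;
}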
X-Cube-AI vs ARM-NN-SDK?
ARM-NN is actually only an inference engine for CPUs, GPUs and NPUs (Arm Cortex-A CPUs, Arm Mali GPUs, ...); it is currently not compatible with Cortex-M microcontrollers. There is no official Cortex-M support through the ARM-NN-SDK (or a derivative) built on top of the CMSIS-NN library, i.e. no model translator/converter that generates an optimized C-model implementation.
Best Regards,
Jean-Michel
2019-04-14 05:38 AM
What about Cube-AI vs CMSIS-NN from a performance perspective? CMSIS-NN claims a ~4x runtime improvement thanks to, as you've already said, fixed-point calculations. On the other hand, its library support seems messy: you have to add each layer manually in your source code, as in the sketch below. I'd like to see a simple comparison of memory footprint and FPS for an MNIST model with the Cube-AI and CMSIS-NN backends.
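For illustration, hand-wiring a small MNIST-style network with the legacy CMSIS-NN q7 kernels looks roughly like this. It is only a sketch: the weight arrays, bias/output shift amounts and layer dimensions are placeholders that would have to come from your own offline quantization of the model.

#include "arm_nnfunctions.h"   /* CMSIS-NN q7 kernels */

/* Hypothetical quantized weights/biases exported offline */
extern const q7_t conv1_wt[], conv1_bias[];
extern const q7_t fc1_wt[], fc1_bias[];

/* Scratch buffer sized for the largest kernel requirement
 * (here the fully-connected vec_buffer: dim_vec q15 elements) */
static q15_t buffer_a[12 * 12 * 8];
static q7_t  img_buf1[24 * 24 * 8];
static q7_t  img_buf2[12 * 12 * 8];
static q7_t  fc_out[10];

void mnist_forward(const q7_t *input)   /* 28x28x1, q7 */
{
    /* conv1: 28x28x1 -> 24x24x8, 5x5 kernel, stride 1, no padding */
    arm_convolve_HWC_q7_basic(input, 28, 1, conv1_wt, 8, 5, 0, 1,
                              conv1_bias, 0 /* bias_shift */, 7 /* out_shift */,
                              img_buf1, 24, buffer_a, NULL);
    arm_relu_q7(img_buf1, 24 * 24 * 8);

    /* maxpool: 24x24x8 -> 12x12x8, 2x2 window, stride 2 */
    arm_maxpool_q7_HWC(img_buf1, 24, 8, 2, 0, 2, 12, NULL, img_buf2);

    /* fully connected: 12*12*8 -> 10 classes, then softmax */
    arm_fully_connected_q7(img_buf2, fc1_wt, 12 * 12 * 8, 10,
                           0 /* bias_shift */, 7 /* out_shift */,
                           fc1_bias, fc_out, buffer_a);
    arm_softmax_q7(fc_out, 10, fc_out);
}

Every dimension, shift and buffer has to be kept consistent with the offline quantization by hand, which is exactly the bookkeeping a code generator would otherwise do for you.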
Thank you.
2019-07-20 06:06 AM
In X-CUBE-AI 4.0 there is support for quantized TensorFlow Lite / Keras models. But will it actually use the SIMD instructions of an M4F when running with 8-bit quantization (like CMSIS-NN does)?
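For context, the speedup CMSIS-NN gets on an M4F comes from the Cortex-M4/M7 DSP extension: q7 operands are sign-extended to q15 pairs and fed to the SMLAD instruction, which performs two 16x16 multiply-accumulates per cycle. Below is a minimal sketch of that mechanism, my own illustration rather than actual X-CUBE-AI or CMSIS-NN kernel code; it assumes a core with the DSP extension (where CMSIS-Core provides the __SMLAD intrinsic).

#include <stdint.h>
#include <string.h>
#include "cmsis_compiler.h"   /* CMSIS-Core intrinsics, incl. __SMLAD on M4/M7 */

typedef int16_t q15_t;
typedef int32_t q31_t;

/* Dot product of two q15 vectors using SMLAD: each call performs two
 * 16x16->32 multiply-accumulates in a single instruction, which is the
 * source of the runtime advantage over a plain C fixed-point loop. */
q31_t dot_q15_dsp(const q15_t *a, const q15_t *b, uint32_t n)
{
    q31_t acc = 0;

    for (uint32_t i = 0; i + 1 < n; i += 2) {
        uint32_t va, vb;
        /* Pack two adjacent q15 values into one 32-bit word each */
        memcpy(&va, &a[i], sizeof(va));
        memcpy(&vb, &b[i], sizeof(vb));
        /* acc += a[i]*b[i] + a[i+1]*b[i+1] in one SMLAD instruction */
        acc = (q31_t)__SMLAD(va, vb, (uint32_t)acc);
    }
    if (n & 1U)   /* odd leftover element */
        acc += (q31_t)a[n - 1] * b[n - 1];

    return acc;
}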