2024-05-24 11:05 AM
Hi,
Currently, I want to use the TFLM library to do the inference rather than Cube.AI. What I am doing is:
1. Download the latest TFLM library sources.
2. Build them with the command: "make -f tensorflow/lite/micro/tools/make/Makefile TARGET=cortex_m_generic TARGET_ARCH=cortex-m33 OPTIMIZED_KERNEL_DIR=cmsis_nn microlite"
3. Create a new project in STM32CubeIDE and copy the tensorflow and third_party folders produced by the build into the project.
4. Add include paths so that all the needed header files in the tensorflow and third_party folders are found (a minimal sketch of the code I expect to compile against them follows this list).
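For reference, this is roughly the minimal inference code I expect to build once the include paths resolve. The model array name, arena size, and registered ops are placeholders for my actual model, and the exact headers and constructor arguments can differ between TFLM snapshots:

#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Model flatbuffer exported as a C array (placeholder name).
extern const unsigned char g_model_data[];

// Scratch memory for tensors; the required size depends on the model.
constexpr int kTensorArenaSize = 16 * 1024;
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

int run_inference() {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the ops the model actually uses (placeholders here).
  static tflite::MicroMutableOpResolver<2> resolver;
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  TfLiteTensor* input = interpreter.input(0);
  // ... fill input->data.f (or input->data.int8) with the input sample ...

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  TfLiteTensor* output = interpreter.output(0);
  // ... read the results from output->data ...
  return 0;
}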
Is the above process correct? I have tried multiple times, but the build always fails with "missing some files in tensorflow folder". Do you have any suggestions?
Thanks in advance
2024-05-24 11:24 AM
X-CUBE-AI inside STM32CubeMX can create a project for you that uses the TensorFlow Lite Micro runtime.
The snapshot that we use is not the latest one, but at least it should give you a working project that you can then update with the latest sources.
2024-05-24 12:56 PM
Dear Daniel,
For the "tensorflow lite micro runtime", is it true that there is an analyze tool in CubeMX that can help see the system performance based on tflm runtime (as shown in the below picture).
My purpose is want to debug each piece of the TFLM codes, and to see how they are working. In this case, I want to know if any ways to import TFLM library into STM32 boards and debug it.
2024-05-27 12:03 AM
In the UI, with the TFLite Micro runtime you are able to do an automatic validation on the target: the tool will generate, compile, and flash a validation program on the target and then run its "validate" command.
With the validate command you'll get the inference time and a comparison of the inference results between the code running on the target and a Python run.
You can also generate the system performance project, which will display just the inference time on a serial port.
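If you later want to reproduce that kind of measurement in your own debug builds, a minimal sketch could look like the one below, using the Cortex-M DWT cycle counter. It assumes CMSIS device headers, that printf is already retargeted to a UART, and an interpreter object like the one in your sketch above; it is an illustration, not the exact code the generated project uses.

#include <cstdint>
#include <cstdio>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "main.h"  // CubeMX-generated header; pulls in the CMSIS/HAL device definitions

// Enable the DWT cycle counter (present on most Cortex-M33 parts).
// Call this once after clock configuration.
static void dwt_init(void) {
  CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  // enable the trace block
  DWT->CYCCNT = 0;                                 // reset the counter
  DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;             // start counting cycles
}

// Run one inference and print its duration via the retargeted printf.
static uint32_t timed_invoke(tflite::MicroInterpreter& interpreter) {
  const uint32_t start = DWT->CYCCNT;
  interpreter.Invoke();
  const uint32_t cycles = DWT->CYCCNT - start;
  const uint32_t us = cycles / (SystemCoreClock / 1000000U);
  printf("inference: %lu cycles (~%lu us)\r\n",
         (unsigned long)cycles, (unsigned long)us);
  return cycles;
}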
Regards