Want to deploy TFLM library on STM32L562E

haoliu0027
Associate

Hi,

Currently, I want to use the TFLM library to do inference rather than Cube.AI. What I am doing is:

1. Download the latest TFLM sources.

2. Build the static library with: "make -f tensorflow/lite/micro/tools/make/Makefile TARGET=cortex_m_generic TARGET_ARCH=cortex-m33 OPTIMIZED_KERNEL_DIR=cmsis_nn microlite"

3. Create a new project in STM32CubeIDE and copy the tensorflow and third_party folders produced by the build into the project.

4. Include all needed header files from the tensorflow and third_party folders (a minimal application sketch that ties these steps together follows this list).
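
For reference, a minimal sketch of the application code such a project typically runs is below. It assumes the model has been converted to a C array (model_data is a hypothetical placeholder) and a recent TFLM snapshot; the MicroInterpreter constructor has changed between versions, so treat this as a sketch rather than copy-paste code.

#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char model_data[];  // hypothetical: your .tflite model as a C array

constexpr int kArenaSize = 16 * 1024;     // assumption: tune this for your model
static uint8_t tensor_arena[kArenaSize];

int RunInference(void) {
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the operators the model actually uses;
  // FullyConnected is just an example here.
  static tflite::MicroMutableOpResolver<1> resolver;
  resolver.AddFullyConnected();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  TfLiteTensor* input = interpreter.input(0);
  // ... fill input->data with your features ...
  if (interpreter.Invoke() != kTfLiteOk) return -1;

  TfLiteTensor* output = interpreter.output(0);
  (void)output;  // results are in output->data
  return 0;
}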


Is the above process correct? I have tried multiple times, but the build always fails with "missing some files in tensorflow folder" errors. Do you have any suggestions?
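
For anyone hitting the same error: missing-header failures like this usually mean the IDE's include paths do not cover the copied folders. A hedged example of the paths such a project typically needs, assuming both folders were copied to the project root (the ${ProjDirPath} variable and the exact third_party subfolders may differ in your setup):

-I"${ProjDirPath}"
-I"${ProjDirPath}/third_party/flatbuffers/include"
-I"${ProjDirPath}/third_party/gemmlowp"
-I"${ProjDirPath}/third_party/kissfft"
-I"${ProjDirPath}/third_party/ruy"

If you build with OPTIMIZED_KERNEL_DIR=cmsis_nn, the CMSIS and CMSIS-NN headers from the build tree have to be on the include path as well.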

Thanks in advance


3 REPLIES
fauvarque.daniel
ST Employee

X-CUBE-AI inside STM32CubeMX can create a project for you that uses the TensorFlow Lite Micro runtime.

The snapshot we use is not the latest one, but it should at least give you a working project that you can then update with the latest sources.


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

haoliu0027
Associate

Dear Daniel,

Regarding the "TensorFlow Lite Micro runtime": is it true that there is an analysis tool in CubeMX that can show the system performance of the TFLM runtime (as shown in the picture below)?

My purpose is to debug each piece of the TFLM code and see how it works. For that, I would like to know whether there is any way to import the TFLM library into an STM32 project and debug it on the board.


[Image: haoliu0027_0-1716580414064.png]
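
As a side note on the debugging question above: when running TFLM directly on a bare-metal target, its log output (MicroPrintf and error messages) goes through a DebugLog() hook that the application must provide. A minimal sketch of retargeting it to a UART, assuming a CubeMX-generated handle named huart1 (hypothetical) and a recent TFLM snapshot whose DebugLog takes a format string and va_list (older snapshots take a plain string):

#include <stdarg.h>
#include <stdio.h>
#include <string.h>

#include "stm32l5xx_hal.h"

extern UART_HandleTypeDef huart1;  // hypothetical UART handle from your CubeMX project

extern "C" void DebugLog(const char* format, va_list args) {
  char buf[128];
  vsnprintf(buf, sizeof(buf), format, args);  // format into a local buffer
  HAL_UART_Transmit(&huart1, (uint8_t*)buf, strlen(buf), HAL_MAX_DELAY);
}

With this hook in place you can step through the kernels in STM32CubeIDE while still seeing TFLM's own log messages on a serial console.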


fauvarque.daniel
ST Employee

In the UI, with the TFLite Micro runtime you can run an automatic validation on the target: the tool generates, compiles and flashes a validation program to the board and then runs the "validate" command.

With the validate command you'll get the inference time and a comparison of the inference results between the code running on the target and a reference Python run.
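
For reference, the same validation can also be driven from the X-CUBE-AI command-line tool; the executable name and flags vary between versions, so treat this as a hedged sketch and check the tool's --help for your install:

stm32ai validate -m network.tflite --mode stm32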

You can also generate the system performance project, which simply reports the inference time on a serial port.

Regards

