2025-08-08 10:17 AM
I am trying to run TFLM on the STM32F769 eval board. I built TFLM as a static library and included it in my project. My app crashes while calling AllocateTensors(). Has anyone had any luck with it? Also, I couldn't find any official demo for running TFLM on STM32.
I am struggling to run my basic model on the STM32F769 board. It fails in the AllocateTensors() step.
My program crashes on the 9th iteration, inside the call chain AllocateTensors() -> StartModelAllocation() -> AllocateTfLiteEvalTensors() -> InitializeTfLiteEvalTensorFromFlatbuffer().
Any help here will be appreciated.
I couldn't find any official documentation or tutorial to follow.
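For context, a typical minimal TFLM setup looks like the sketch below. This is not an official ST example; the model symbol `g_model_data` and the arena size `kArenaSize` are placeholders, and an undersized or misaligned tensor arena (or a corrupt flatbuffer in flash) is a common cause of AllocateTensors() failures:

```cpp
// Minimal TFLM setup sketch (placeholder model data and arena size).
#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];  // flatbuffer, e.g. from xxd

// The arena must be big enough for all tensors; 16-byte alignment is safest.
constexpr int kArenaSize = 32 * 1024;
alignas(16) static uint8_t tensor_arena[kArenaSize];

int setup_interpreter() {
  const tflite::Model* model = tflite::GetModel(g_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) return -1;

  // Register only the ops your model actually uses.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);
  // Returns an error (or hard-faults if the arena lands in a bad
  // memory region) when allocation cannot complete.
  if (interpreter.AllocateTensors() != kTfLiteOk) return -2;
  return 0;
}
```

On STM32 parts it is also worth checking which RAM region the linker places `tensor_arena` in, since placing it in a region the core cannot access cleanly can crash at exactly this step.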
2025-08-11 12:09 AM
Hello @hadi81,
We no longer support TFLM.
You can find our documentation on supported neural network tooling here:
https://stedgeai-dc.st.com/assets/embedded-docs/index.html
Have a good day,
Julian
2025-08-19 10:50 AM
The issue seemed to be related to the board. I changed my board and it started working fine.
2025-08-20 12:34 AM
Hello @hadi81,
Thanks for the update.
Could you share what materials you are using related to TFLM?
I am pretty sure it is no longer supported by ST, meaning that for future issues we will not be able to help you.
Almost all AI support on STM32 is now provided through the ST Edge AI Core.
Have a good day,
Julian