2025-04-23 11:03 AM
Under STM32CubeIDE I generated a neural network for my Nucleo board with X-CUBE-AI 9.0.0 from a quantized TFLite model of a sine function. I followed the same steps as I did about a year ago with X-CUBE-AI 8.1.0 (it worked at the time), but this time, when I run inference, the network returns a constant output, independent of its input. My steps are essentially these:
- Create a project for my board (Nucleo F767ZI);
- Add the X-CUBE-AI 9.0.0 pack and the TFLite model;
- Accept all the optimizations that the IDE proposes;
- Activate the CRC;
- Generate code;
- Add to main() the code that creates the network, sets the activations and the input and output buffers, and runs the network. The code is almost identical to the example in the X-CUBE-AI documentation.
My development platform is a MacBook Pro with an M3 chip running macOS Sequoia. I have likely missed something: a year has passed and I may not have written down every detail of the configuration, but I have tried every variation I could think of and nothing seems to work. My STM32CubeIDE project is attached. If anyone can spot an error, I will be grateful.
2025-04-24 1:40 AM
Hello @PB1,
First, please install version 10.0.0 of X-CUBE-AI, as it integrates bug fixes from previous versions (always use the latest version).
Then here is what I suggest doing:
- If you only want to validate your model on target (compare how the model behaves on the board against the reference results), use the validation template.
- If you want an example of a running application that computes the inference time on a random input and sends some metrics over serial, use the system performance template.
- If you want a template where the AI run function is already in place, but you still have to fill in the input and output functions, select the application template.
Here I will take the system performance template:
Have a good day,
Julian