
X-CUBE-AI 9.0.0 generates a network for which inference does not work

PB1
Associate III

Under STM32CubeIDE I generated a neural network for my Nucleo board with X-CUBE-AI 9.0.0 from a quantized TFLite model of a sine function. I followed the same steps as I did about one year ago with X-CUBE-AI 8.1.0 (it worked at the time), but this time, when I perform inference, the network answers with a constant output independent of its input. My steps are essentially these:

- Create a project for my board (Nucleo F767ZI);
- Add the X-CUBE-AI 9.0.0 pack and the TFLite model;
- Accept all the optimizations that the IDE proposes;
- Activate the CRC;
- Generate code;
- Add to main() the code to create the network, set the activation, input, and output buffers, and run the network (see the sketch below this list); the code is almost identical to the example contained in the X-CUBE-AI documentation.
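
For reference, that main()-side code follows roughly this pattern. This is a minimal sketch based on the embedded client API shipped with X-CUBE-AI; the AI_NETWORK_* macros and ai_network_* functions come from the generated network.h / network_data.h and assume the default model name "network", and the ai_init / ai_run helper names are just placeholders.

```c
#include "network.h"
#include "network_data.h"

/* Handle of the network instance created by the X-CUBE-AI runtime */
static ai_handle network = AI_HANDLE_NULL;

/* Activation and I/O buffers, sized with the macros from the generated code.
   For a quantized model with int8 I/O, inputs must be quantized (using the
   scale / zero-point reported by CubeMX) before being copied into in_data. */
AI_ALIGNED(32) static ai_u8 activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];
AI_ALIGNED(32) static ai_i8 in_data[AI_NETWORK_IN_1_SIZE_BYTES];
AI_ALIGNED(32) static ai_i8 out_data[AI_NETWORK_OUT_1_SIZE_BYTES];

static int ai_init(void)
{
  const ai_handle act_addr[] = { activations };

  /* Create and initialize the network instance */
  ai_error err = ai_network_create_and_init(&network, act_addr, NULL);
  return (err.type == AI_ERROR_NONE) ? 0 : -1;
}

static int ai_run(void *in, void *out)
{
  /* Attach the user buffers to the network I/O descriptors */
  ai_buffer *ai_input  = ai_network_inputs_get(network, NULL);
  ai_buffer *ai_output = ai_network_outputs_get(network, NULL);
  ai_input[0].data  = AI_HANDLE_PTR(in);
  ai_output[0].data = AI_HANDLE_PTR(out);

  /* Run one inference; the return value is the number of batches processed */
  ai_i32 n_batch = ai_network_run(network, ai_input, ai_output);
  return (n_batch == 1) ? 0 : -1;
}
```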

My development platform is a MacBook Pro with an M3 chip running macOS Sequoia. It is likely that I missed something, since one year has passed and I possibly forgot to write down some details of the configuration, but I tried all the variations of the configuration that I could think of and nothing seems to work. I have attached my STM32CubeIDE project. If anyone is able to spot an error, I will be grateful.

1 REPLY
Julian E.
ST Employee

Hello @PB1,

 

First, please install version 10.0.0 of X-CUBE-AI, as it integrates bug fixes from previous versions (always use the latest version).

 

Then here is what I suggest doing:

  1. Open STM32CubeMX and select your board with the board selector
  2. Initialize all peripherals with their default settings when prompted
  3. Activate X-CUBE-AI 10.0.0 and select either the validation, system performance, or application template.

Basically, if you only want to validate your model on target (compare how the model behaves on your target with the reference results on the host), use the validation template.

If you want an example of a running application that computes the inference time on a random input and sends some metrics over the serial port, use the system performance template.

If you want a template where the AI run function is already in place, but where you have to edit the input and output functions yourself, select the application template.
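
With the application template, the input and output functions to edit are stub functions in the generated app_x-cube-ai.c, roughly along these lines. This is only a sketch: the exact names and signatures can differ between X-CUBE-AI versions, and the data[] indices assume a single-input, single-output model.

```c
/* Sketch of the user stubs in the generated app_x-cube-ai.c */
int acquire_and_process_data(ai_i8 *data[])
{
  /* Fill data[0] (the first network input buffer) with your input values,
     e.g. the quantized sine-function input. */
  return 0;
}

int post_process(ai_i8 *data[])
{
  /* Read the inference result from data[0] (the first network output buffer)
     and use it, e.g. print it over the UART. */
  return 0;
}
```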

 

Here I will take the system performance template:

[Screenshot: JulianE_0-1745483502024.png]

 

  • Then, click No when asked to automatically set the clock and peripherals for better performance; I believe it can sometimes create issues.
  • Then browse for your model
  • Click on Analyze to at least make sure that your model is compatible
  • Generate the code with your toolchain/IDE
  • Open STM32CubeIDE -> Build -> Flash
  • Open Tera Term to check that you get the output

[Screenshot: JulianE_1-1745483986071.png]

 

Have a good day,

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.