
Problems with running an AI Model on the NUCLEO-N657X0-Q Board

Coop23
Associate

Hello,

I'm working with a NUCLEO-N657X0-Q board, trying to run/debug a TensorFlow Lite model that has been quantized to int8 so it can run on the hardware. I've tried several approaches, but every attempt hits a dead end where one error leads to three more. Any help with any of these approaches would be greatly appreciated.

 

What I've tried so far (every tool is the most recent version as of 07/01/25):

 

1. Building an STM32CubeIDE project in CubeMX using the X-CUBE-AI expansion package

I first tested the model by uploading it and running Analyze Model in CubeMX, which gives good results: the expected number of weights, activations, and layers (all mapped to hardware). Validate On Desktop works and gives the expected results, but Validate On Target throws the following errors:

E200 (ValidationError): stm32: Unable to bind the STM AI runtime with "network" c-model: []

E801 (HwIOError): InvalidFirmware

Besides this, generating the code and trying to build produces several linker errors in the HAL drivers and middleware, which I managed to get past by linking and connecting the source locations in the project's C/C++ Build and Settings properties.

The problem I have now with the sample code is that the network never finishes running (LL_ATON_RT_DONE is never returned). I have no idea why the loop below runs forever, and stepping into the runtime functions doesn't tell me much at all.

 

LL_ATON_RT_Init_Network(&NN_Instance_final_model);  // Initialize passed network instance object
do {
   /* Execute first/next step */
   ll_aton_rt_ret = LL_ATON_RT_RunEpochBlock(&NN_Instance_final_model);
   /* Wait for next event */
   if (ll_aton_rt_ret == LL_ATON_RT_WFE) {
      LL_ATON_OSAL_WFE();
   }
} while (ll_aton_rt_ret != LL_ATON_RT_DONE);

 

 

2. Using the STM32Cube extension for VS Code

After several hours of navigating problems with STM32CubeCLT installing to odd locations, I was able to successfully import a CMake project generated by CubeMX, but all of the auto-generated CMake files break the build whenever I try to build or debug. The errors usually say no executable was created. I've tried editing CMakeLists.txt manually, but the build still never produces the .elf file:

[cmake] CMake Error at CMakeLists.txt:21 (add_executable):
[cmake]   No SOURCES given to target: final_model_test.elf
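For reference, the workaround I've been experimenting with is listing the sources explicitly instead of relying on the generated variables (the target name and paths below are from my project, so they will differ):

```cmake
# Added after the add_executable() call that CMake complains about.
# Paths are mine -- the point is just to give the target real sources.
target_sources(final_model_test.elf PRIVATE
    Core/Src/main.c
    Core/Src/app_aton.c          # where the LL_ATON loop lives
    X-CUBE-AI/App/network.c      # generated model
)
```

Even with this, the generated toolchain files still fail elsewhere, so I suspect the import itself is producing a broken project.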

 

Update: after closing VSCode, I can no longer 'Import CMake Project' or 'Create Empty Project'; VSCode stalls on 'Generating vscode_launch.json'.

 

3. Starting with a N6 ModelZoo project as an outline (Audio Detection)

https://github.com/STMicroelectronics/stm32ai-modelzoo-services/tree/main/application_code/audio/STM32N6

This was a much more promising attempt, and the example's model runs well, but any attempt to swap their model for mine leaves the model not fully connected. I'm guessing this is because they use a .onnx model and mine is .tflite, but I can't find any explanation of how to swap the two correctly, despite nearly identical shapes and inputs/outputs. I tried simply replacing the generated model file (network.c) with an identically named one for my model and updating everything to point to the new network file. The problem is that this line inside AiDPULoadModel initializes the neural network with 0 parameters for my model:

npu_get_instance_by_index(0, pxCtx->net_exec_ctx);
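For context, this is how I regenerated network.c from my .tflite with the ST Edge AI Core CLI before dropping it into the project (the paths and the neural-art profile name are from my setup and may well be wrong, which could itself be the cause):

```shell
# Regenerate the C model for the N6 NPU from my quantized .tflite.
# 'default@user_neuralart.json' is the profile from my install; other
# setups may name the memory-pool/profile file differently.
stedgeai generate --model final_model.tflite --target stm32n6 \
    --st-neural-art default@user_neuralart.json
```

The command completes without errors, so I don't know whether the generated files or the way the example loads them is at fault.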

 

I understand that example code for this board is extremely limited, but I'm hoping that running a very small and simple neural network shouldn't be too hard. Again, any help with any of these approaches would be greatly appreciated. Thank you in advance.
