
Custom model object detection on STM32N6-DK error

mls
Associate III

Hi,

I have built a custom model based on the ST YoloX model.
It has a 480x480 input, with an int8 input type and a float output type, and five classes for vehicle detection.
It was quantized and tested on the STM32N6570-DK.
Now I'm trying to add my model to this ST example:
https://github.com/STMicroelectronics/STM32N6-GettingStarted-ObjectDetection/tree/main
I'm following this tutorial to deploy my model:
https://github.com/STMicroelectronics/STM32N6-GettingStarted-ObjectDetection/blob/main/README.md#application-build-and-run---dev-mode

I'm using STM32CubeIDE to program the board.
After programming network_data.hex, I try to deploy and debug the application,
but I get an error at this step: app_postprocess_init.
The parameters:

[Screenshot: 2025-11-17_16h12_42.png]

[Screenshot: 2025-11-17_17h08_34.png]

 

Do you have an idea how I can resolve this?

Julian E.
ST Employee

Hi @mls,

Did you edit app_config.h and the post-processing to match your model's needs?

Please look at this document:

STM32N6-GettingStarted-ObjectDetection/Doc/Deploy-your-tflite-Model-STM32N6570-DK.md at main · STMicroelectronics/STM32N6-GettingStarted-ObjectDetection
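
Concretely, the edits look roughly like the sketch below. This is a rough sketch only: the macro names follow the app_config.h shipped with the Getting Started example, so double-check them against your revision, and the class labels are placeholders for your five vehicle classes.

/* app_config.h (sketch): values a 480x480, 5-class ST YoloX model needs.
 * Verify the exact macro names against the header in your copy of the repo. */

/* Neural network input resolution */
#define NN_WIDTH              (480)
#define NN_HEIGHT             (480)

/* Detection classes (placeholder labels for the five vehicle classes) */
#define NB_CLASSES            (5)
#define CLASSES_TABLE         {"car", "truck", "bus", "van", "motorbike"}

/* The post-processing selection must also match the quantized input/output
 * types of your network; this is what app_postprocess_init is sensitive to. */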

 

Have a good day,

Julian


mls
Associate III

Hi @Julian E.,

I found the solution to my first issue! I had quantized the model with an int8 input and a float output, which is not supported in your example.
I re-quantized my model with a uint8 input and an int8 output:

#define POSTPROCESS_OD_ST_YOLOX_UI      (106)  /* ST YoloX postprocessing; Input model: uint8; output: int8          */
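
Then I selected that variant in app_config.h. A minimal sketch, assuming the example picks its post-processing through a POSTPROCESS_TYPE define as in the Getting Started application:

/* app_config.h: select the ST YoloX post-processing variant that matches
 * a uint8-input / int8-output quantized model */
#define POSTPROCESS_TYPE      POSTPROCESS_OD_ST_YOLOX_UI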


I downloaded the model to the board and launched the debugger.
Now it is stuck at Run Inference:

[Screenshot: mls_0-1763561213766.png]

 

One question: does the model input need to be quantized as uint8, or can it be int8?


Hello @mls,

 

I think your issue may be due to a conflict between the input and output types your model uses, the types you used in the generate command, and what the firmware expects (as described in app_config.h).
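
To see why such a mismatch breaks everything, consider how the same raw byte decodes under each assumption. This is a small standalone illustration, not code from the example:

#include <stdio.h>
#include <stdint.h>

/* Toy illustration of an int8/uint8 mismatch: the same raw byte decodes
 * to very different values depending on the type the firmware assumes,
 * which corrupts every score and box coordinate downstream. */
int main(void)
{
    uint8_t raw = 0x90;                         /* one quantized activation byte */
    printf("read as uint8: %u\n", (unsigned)raw);      /* prints 144  */
    printf("read as int8 : %d\n", (int)(int8_t)raw);   /* prints -112 */
    return 0;
}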

 

Can you share the exact generate command you used and your app_config.h?

 

Also, if you can share your model, it would be helpful (in private message if you want).

 

Have a good day,

Julian



Hi @Julian E.,

Unfortunately, for the moment I don't have access to the GPU server (thank you OVH...) that I used to train the model.
What I did today was quantize my model (.h5 file) with the ST Edge AI Developer Cloud.

[Screenshot: mls_1-1763567476705.png]

The whole process runs OK.

 

[Screenshot: mls_2-1763567546276.png]

Here is the app_config.h.

I will send you the model by private message


Hi @mls,

 

When I deploy your model, it works for a few inferences before freezing. Is that what you also observe?

I am trying to figure out what is happening.
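
If you want to narrow it down on your side in the meantime, one generic approach is to bracket each stage of the capture/inference/display loop with a trace on the serial console. A hedged sketch with hypothetical stage names (the real calls live in the example's main application code):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-ins for the example's actual pipeline calls */
extern void capture_frame(void);
extern void run_inference(void);
extern void postprocess_and_display(void);

/* Print a marker before each stage: the last marker visible on the
 * console when the freeze happens tells you which stage is hanging. */
void process_one_frame(void)
{
    static uint32_t frame_count;

    printf("[%lu] capture\n", (unsigned long)frame_count);
    capture_frame();

    printf("[%lu] inference\n", (unsigned long)frame_count);
    run_inference();

    printf("[%lu] postprocess/display\n", (unsigned long)frame_count);
    postprocess_and_display();

    frame_count++;
}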


Have a good day,

Julian


mls
Associate III

Hi @Julian E. ,

Yes, there are probably some inferences running, but nothing is displayed on the LCD screen.
Let me know if you make progress on that. We are also testing an Alif solution in order to choose our future platform to replace our NVIDIA solutions...

Regards,

Mickaël

mls
Associate III

Hi @Julian E.,

Did you make any progress?

This morning I tried with the original st_yolo_x_nano_480_1.0_0.25_3_st_int8.tflite model. Same issue...