2025-11-17 8:11 AM
Hi,
I have built a custom model based on the ST YoloX model.
The model has a 480x480 input, with int8 input and float output.
It detects five vehicle classes.
It was quantized and tested on the STM32N6570-DK.
Now I'm trying to add my model to this ST example:
https://github.com/STMicroelectronics/STM32N6-GettingStarted-ObjectDetection/tree/main
I'm following this tutorial to deploy my model:
https://github.com/STMicroelectronics/STM32N6-GettingStarted-ObjectDetection/blob/main/README.md#application-build-and-run---dev-mode
I'm using STM32CubeIDE to program the board.
After programming network_data.hex, I tried to deploy and debug the application.
But I get an error at this step: app_postprocess_init
Do you have an idea how I can resolve this?
2025-11-19 4:56 AM
Hi @mls,
Did you edit app_config.h and the post-processing to match your model's needs?
Please look at this document:
Have a good day,
Julian
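As a rough sketch, the model-specific post-processing settings live in app_config.h. The define names below are assumptions based on ST's object-detection example and may vary between versions of the application, so check them against your own copy; the values shown match the model described in this thread (five vehicle classes, 480x480 input):

```c
/* Sketch of the post-processing section of app_config.h.
 * Names and exact values are assumptions to verify against the
 * ST object-detection example you are actually using. */

/* Select the post-processing variant matching the model's I/O types */
#define POSTPROCESS_TYPE                    POSTPROCESS_OD_ST_YOLOX_UF

/* Model-specific parameters */
#define AI_OD_ST_YOLOX_PP_NB_CLASSES        (5)     /* vehicle classes  */
#define AI_OD_ST_YOLOX_PP_CONF_THRESHOLD    (0.6f)  /* score cutoff     */
#define AI_OD_ST_YOLOX_PP_IOU_THRESHOLD    (0.5f)  /* NMS IoU cutoff   */
```

If the selected POSTPROCESS_TYPE does not match the model's actual input/output types, app_postprocess_init is a plausible place for the failure to show up.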
2025-11-19 6:07 AM - edited 2025-11-19 6:13 AM
Hi @Julian E.
I found the solution to my first issue! I had quantized the model with an int8 input and a float output, which is not supported in your example.
I re-quantized my model as uint8 (input) / int8 (output):
#define POSTPROCESS_OD_ST_YOLOX_UI (106) /* ST YoloX postprocessing; Input model: uint8; output: int8 */
I downloaded the model to the board and launched the debug session.
Now it's stuck at the "Run Inference" step.
One question: does the model input need to be quantized as UINT8, or can it be INT8?
2025-11-19 7:41 AM
Hello @mls,
I think your issue may be due to a mismatch between the input/output types your model uses, the types you specified in the generate command, and what the firmware expects (as described in app_config.h).
Can you share the exact generate command you used and your app_config.h?
Also, sharing your model would be helpful (in a private message if you prefer).
Have a good day,
Julian
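For context, the dev-mode README drives code generation with the ST Edge AI CLI. A hedged sketch of what such a generate command can look like follows; the exact flag names, the model file name, and the neural-art profile file are assumptions to check against the README and your tool version:

```shell
# Assumed ST Edge AI CLI invocation for the STM32N6 target.
# The model file name is illustrative; the --st-neural-art profile
# comes from ST's dev-mode documentation and may differ per version.
stedgeai generate --model my_yolox_480_uint8.tflite \
                  --target stm32n6 \
                  --st-neural-art default@user_neuralart.json
```

The input/output types baked into the .tflite file at this step are what must agree with the POSTPROCESS_* variant selected in app_config.h.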
2025-11-19 7:54 AM
Hi @Julian E.
Unfortunately, I don't currently have access to the GPU server (thank you, OVH...) that I used to train the model.
What I did today was quantize my model (.h5 file) with ST Edge AI Developer Cloud.
The whole process completes without errors.
See my app_config.h here:
I will send you the model by private message.
2025-11-26 8:26 AM
Hi @mls,
When I deploy your model, it runs a few inferences before freezing; is that what you observe as well?
I am trying to figure out what is happening.
Have a good day,
Julian
2025-11-26 9:18 AM
Hi @Julian E. ,
Yes, some inferences probably run, but nothing is displayed on the LCD screen.
Let me know if you make progress on this. We are also testing an Alif solution in order to choose our future platform to replace our NVIDIA solutions...
Regards,
Mickaël
2025-11-28 3:14 AM
Hi @Julian E. ,
Did you make any progress?
This morning I tried with the original st_yolo_x_nano_480_1.0_0.25_3_st_int8.tflite model. Same issue...