
Custom-trained YOLO11n / YOLOv8n STM32N6 Nucleo deployment

LBO
Associate II

Dear ST community,

 

We've been evaluating the STM32N6 NPU since early January. We aim to deploy YOLOv8n / YOLO11n on the NPU.

So far we've been able to use the model zoo provided by ST. We weren't able to do the same with the YOLOv8n or YOLO11n models we trained using the Ultralytics scripts.

 

Here is the procedure we applied:

- Download the modelzoo and modelzoo-services repos

- Verify that stm32ai_main.py with deployment_n6_yolo11n_config.yaml works (it does; we just modified it to target the Nucleo board)

- Download / train a YOLO11n model using the Ultralytics framework with the same input dimensions; export it to TFLite format

- Use the ST script (tflite_quant.py) to quantize the TFLite model

- Import the quantized TFLite model and modify the .yaml to use our trained model instead of the ST version

- Let the design flow execute
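For reference, the model swap in the last two steps is only a small edit to the deployment config. A minimal sketch of the relevant fragment (key names follow the model zoo convention but may differ between releases, and the model path below is a placeholder, not our actual file):

```yaml
# Illustrative fragment of deployment_n6_yolo11n_config.yaml.
# Key names may vary by model zoo release; the path is a placeholder.
general:
  model_path: ./models/yolo11n_256_quant.tflite  # our trained + quantized model
                                                 # instead of the ST-provided one
operation_mode: deployment
deployment:
  hardware_setup:
    serie: STM32N6
    board: NUCLEO-N657X0-Q  # targeting the Nucleo board rather than the DK
```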

 

After that, nothing happens on the board. I've opened STM32Cube to see what happens at the debug level. When reaching Run_Inference(), the NPU sort of "stalls" (an infinite loop on some sort of hardware error).

 

Attached to this post :

- The YOLO11n model we trained / quantized

- The YOLOv8n model we trained / quantized

- The .yaml configuration we used to deploy on the Nucleo

 

Has anyone managed to deploy a YOLO11n model on this target?

 

Best regards,

 

 

LBO
Associate II

Dear ST community,

 

To provide more details about the situation, I've decided to follow the tutorial provided here : https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/object_detection/docs/tuto/How_to_deploy_yolov8_yolov5_object_detection.md 

I followed the procedure to build the model and the "Option 2" procedure for quantization.

You can find attached the scripts I used:

- build_model.py: builds the model using the Ultralytics fork

- user_config.yaml: config file for quantization

- yolov8n_256_quant_pc_ui_od_coco.tflite: the quantized model I got as output
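For anyone reproducing this, the quantization run boils down to a user_config.yaml along these lines (a sketch only; key names follow the model zoo convention but may differ between releases, and the model and dataset paths are placeholders):

```yaml
# Illustrative quantization config for stm32ai_main.py
# (stm32ai-modelzoo-services); key names may vary between releases.
general:
  model_path: ./yolov8n_256.tflite         # float TFLite export (placeholder)
operation_mode: quantization
dataset:
  quantization_path: ./calibration_images  # representative images (placeholder)
quantization:
  quantizer: TFlite_converter
  quantization_type: PTQ                   # post-training quantization
  quantization_input_type: uint8
  quantization_output_type: float
  export_dir: quantized_models
```

One thing that may be worth checking on a custom export is whether the quantized model contains float fallback operators or an unexpected quantization scheme; inspecting the .tflite in a viewer such as Netron before deployment can help rule that out.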

 

So far it seems that the toolchain is not ready for YOLOv8 deployment; maybe this requires a new release from ST to make it work?

 

Best regards,

 

Hi @LBO,

 

The tutorial was outdated, but I see that it was updated 5 days ago; I will try to follow it and let you know.

 

What do you mean exactly by "it seems that the toolchain is not ready for yolov8 deployment; maybe this requires a new release from ST to make it work"?

 

Do you get an error from the compiler when trying to deploy it? 

 

We have yolov8 in the model zoo, so everything should be available to deploy it.

 

Have a good day,

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Hello Julian, thank you for your answer.

To give you more details: the compilation toolchain does produce the required C files, builds the application, and flashes it onto the Nucleo board. However, when booting the board in flash configuration I don't get any output on the USB UVC (I usually wait about 20 seconds).

When I open the application project via STM32CubeProgrammer, start the board in Development mode, and run it in Debug, I do see the application launching, but when it reaches Run_Inference() the application falls into an infinite loop. I usually place a breakpoint just before that call to take a closer look. It's fine when using the model zoo models, but with our trained model it hangs.

[Screenshot attached: Capture d’écran 2026-01-27 172804.png]

I am discussing with the model zoo team and taking a look at your models.

 

Have a good day,

Julian

