
Question Regarding Performance Gap Between Custom and ST Pretrained YOLOv8 Models

VicChang
Associate II

Dear ST team,

I have followed the official tutorial:
How to deploy YOLOv8/YOLOv5 object detection models
and successfully built the deployment pipeline using Ultralytics yolov8n.pt.

However, I’ve noticed a significant gap in inference speed between my converted model and the pre-optimized model provided by ST, even though both were evaluated in the same environment.

Am I missing any optimization steps during the export process?
Are there any recommended configurations or parameters to match the official model’s performance?
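For context, the export step in question can be sketched with the Ultralytics Python API roughly as follows; the image size and calibration dataset shown are illustrative placeholders, not necessarily the exact values used in the ST tutorial:

```python
# Sketch of an int8 TFLite export via the Ultralytics Python API.
# imgsz and data below are placeholders; use the values from the
# ST model-zoo tutorial's configuration.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # official pre-trained weights, unmodified
model.export(
    format="tflite",           # TFLite target for STM32MP2 deployment
    int8=True,                 # full-integer quantization
    imgsz=256,                 # placeholder input resolution
    data="coco8.yaml",         # placeholder calibration dataset
)
```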

Thank you for your support.

[Attachment: Screenshot 2025-06-22 11:00:10 PM.png]

[Attachment: Screenshot 2025-06-22 10:58:49 PM.png]

Laurent FOLLIOT
ST Employee

Hello,
On which platform are you running YOLOv8n?
Regards,
Laurent


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

 

Dear Laurent,

I compared the YOLOv8n model provided by ST with a model I converted myself from the official Ultralytics pre-trained YOLOv8n, focusing on inference latency.
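For a latency comparison like this to be meaningful, both models should be timed the same way, with warm-up iterations excluded so caches and accelerator pipelines are primed. A minimal, self-contained timing helper (the names are hypothetical; run_inference stands in for whichever runtime call is being measured) might look like:

```python
import time
import statistics

def benchmark(run_inference, warmup=10, iters=100):
    """Time a single-inference callable; return (mean_ms, p90_ms)."""
    for _ in range(warmup):                 # discard warm-up runs
        run_inference()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    mean_ms = statistics.fmean(samples)
    p90_ms = samples[int(0.9 * len(samples)) - 1]   # 90th-percentile latency
    return mean_ms, p90_ms
```

Reporting a percentile alongside the mean helps spot scheduling jitter that a single average would hide.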

No retraining or modification was performed; I used the standard Ultralytics YOLOv8n model as-is.
The model was quantized and evaluated following the official ST tutorial:

https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/object_detection/deployment/doc/tuto/How_to_deploy_yolov8_yolov5_object_detection.md
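As background on the quantization step: TFLite full-integer quantization maps each float value to int8 through an affine scale/zero-point pair, and any layer that cannot be expressed this way typically falls back to float, which tends to cost latency on an NPU. A toy illustration of the mapping (the scale and zero-point values are made up):

```python
# Toy illustration of TFLite's affine int8 quantization:
#   real_value ≈ scale * (q - zero_point)
# The scale/zero_point values here are made up for demonstration.

def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))       # clamp to the int8 range

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

scale, zero_point = 0.02, -5
q = quantize(1.0, scale, zero_point)            # -> 45
restored = dequantize(q, scale, zero_point)     # ≈ 1.0
assert abs(restored - 1.0) <= scale / 2         # error bounded by scale/2
```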

The evaluation was performed on the STM32MP257F-EV1 platform:

https://www.st.com/en/microcontrollers-microprocessors/stm32mp2-series.html

If you have any suggestions regarding the quantization process, deployment settings, or expected performance, I would greatly appreciate your input.

Thank you very much!