Help with deploying multi-class YOLOv8n (80-class) model on STM32N6-DK

Doouv
Associate

Hello community,

I am currently working on deploying a full YOLOv8n (80-class COCO) model on the STM32N6570-DK and integrating it with the Object Detection example from the STM32 AI Model Zoo Service.

I closely followed this tutorial:
How to deploy YOLOv8/YOLOv5 Object Detection models

and used the official quantization scripts here:
YOLOv8 Quantization Scripts


Model & Quantization Setup

  • Base model: yolov8n.pt (Ultralytics official)

  • Exported to: SavedModel format

  • Quantization: done per the STM32 AI Model Zoo tutorial

  • Input: uint8 (0–255)

  • Output tested as:

    1. uint8 input / float output

    2. uint8 input / int8 output
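For context on how I am interpreting the quantized I/O: TFLite integer models use an affine mapping, real = scale * (q - zero_point). A minimal sketch of that mapping (the scale/zero-point values are illustrative, not the ones from my exported model):

```python
# Affine (de)quantization as used by TFLite integer models:
#   real = scale * (q - zero_point)
# scale/zero_point below are illustrative; the real values are stored
# per-tensor inside the quantized .tflite file.

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """Map a real value to its integer representation (uint8 by default)."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Map an integer value back to the real domain."""
    return scale * (q - zero_point)

# A uint8 input with scale = 1/255, zero_point = 0 maps pixels 0..255
# straight onto the 0.0..1.0 range the float model expects.
scale, zp = 1 / 255, 0
print(quantize(1.0, scale, zp))    # 255
print(dequantize(128, scale, zp))  # ~0.502
```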

Case  Input Type  Output Type  Result
(1)   uint8       float        Incorrect inference: bounding boxes and class scores both wrong
(2)   uint8       int8         Bounding box coordinates always the same (incorrect); classification correct

After debugging case (2), I found:

  • The bounding box coordinates remain constant across frames and detections, even before post-processing.

  • The output tensors are quantized correctly, but the model’s raw outputs (prior to postprocess) seem fixed.

  • The default post-process used in the deployment script (deploy.py) is set for INT8 output models (POSTPROCESS_OD_YOLO_V8_UI).
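To confirm the constancy is in the raw tensor itself and not in my decoding, I compared the dequantized raw outputs of two consecutive frames with a small host-side helper (a sketch; the helper names are my own, not from the Model Zoo scripts):

```python
# Dequantize the raw int8 output of two frames and check whether the
# values actually change. On target, the int8 lists would come from the
# NPU output buffer; the values below are illustrative.

def dequantize(q_tensor, scale, zero_point):
    return [scale * (q - zero_point) for q in q_tensor]

def raw_outputs_differ(out_a, out_b, scale, zero_point, tol=1e-6):
    a = dequantize(out_a, scale, zero_point)
    b = dequantize(out_b, scale, zero_point)
    return any(abs(x - y) > tol for x, y in zip(a, b))

frame1 = [12, -3, 47, 101]
frame2 = [12, -3, 47, 101]  # identical: the symptom I see for the boxes
print(raw_outputs_differ(frame1, frame2, scale=0.1, zero_point=0))  # False
```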

When I changed this manually to POSTPROCESS_OD_YOLO_V8_UF (for float output) in the STM32N6 Object Detection app and reflashed, the results remained incorrect.
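For reference, this is the kind of change I made (the file and macro names below reflect my local copy of the N6 object-detection application; the exact names and location may differ between Model Zoo versions):

```c
/* app_config.h -- post-process variant selection
 * (file location is an assumption; check your generated project) */

/* default generated for int8-output models:
 * #define POSTPROCESS_TYPE POSTPROCESS_OD_YOLO_V8_UI
 */

/* changed to the float-output variant: */
#define POSTPROCESS_TYPE POSTPROCESS_OD_YOLO_V8_UF
```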


Thank you in advance for your help and guidance.

 
