Issues converting YOLOv8 ONNX model to uint8 .nb for STM32MP25

20DeViL00
Associate

 

Hi everyone,

I’m currently working on deploying a YOLOv8 model on the STM32MP25 using ST Edge AI. Here’s what I’ve done so far and where I’m stuck.

Training & Export:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # nano variant

model.train(data=path, epochs=50, imgsz=256,
            project="Training", name="yolov8n_run", save=True)

# Export to ONNX
model.export(format="onnx", imgsz=256, opset=10, dynamic=False,
             simplify=True, half=False, int8=False)

ONNX model details:

  • Input: [1, 3, 256, 256], float32

  • Output: [1, 5, 1344], float32
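
As a sanity check, this output shape is what I'd expect from a single-class YOLOv8 head at imgsz=256: 5 channels = 4 box coordinates + 1 class score, and 1344 = the total anchor points across the three stride-8/16/32 feature maps:

```python
# Decode the YOLOv8 ONNX output shape [1, 5, 1344] for imgsz=256
imgsz = 256
strides = (8, 16, 32)                          # P3, P4, P5 detection heads
anchors = sum((imgsz // s) ** 2 for s in strides)  # 32*32 + 16*16 + 8*8
channels = 4 + 1                               # x, y, w, h + 1 class score
print(channels, anchors)                       # 5 1344
```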

I successfully converted it to .nb using:

stedgeai generate --model best.onnx --target stm32mp25

By default this produces a float16 model (I tried to run it, but it did not work).

Problem:
The STM32MP25 NPU works with uint8 models, so I tried converting the input/output to uint8:

stedgeai generate --model best.onnx --target stm32mp25 --input-data-type uint8 --output-data-type uint8

This fails during compilation with:

Fatal model compilation error: 512 Error during NBG compilation, model is not supported

Observation:

  • The .nb file generates normally when no data-type conversion is requested.

  • Explicitly requesting uint8 fails.

  • The error suggests the ST Edge AI compiler doesn't support direct uint8 conversion for this ONNX model.

  • I've also tried the ST Edge AI Developer Cloud, but that did not work either.

My Question:
Has anyone successfully converted a YOLOv8 ONNX model to uint8 for STM32MP25? Are there recommended steps for preparing the model for uint8 quantization?

Any advice on preprocessing, exporting, or using ST tools to get a uint8-ready model would be highly appreciated.
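
On preprocessing specifically, my working assumption for feeding a uint8 model is a plain affine mapping of the normalized image (numpy sketch; the real scale/zero-point values would have to come from whatever the quantizer bakes into the model, the ones below are placeholders):

```python
import numpy as np

def quantize_input(x_f32, scale=1.0 / 255.0, zero_point=0):
    """Affine quantization: q = round(x / scale) + zero_point, clamped to uint8."""
    q = np.round(x_f32 / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize_output(q_u8, scale, zero_point):
    """Inverse mapping for a uint8 model output back to float32."""
    return (q_u8.astype(np.float32) - zero_point) * scale

x = np.random.rand(1, 3, 256, 256).astype(np.float32)  # normalized [0, 1) image
q = quantize_input(x)
x_back = dequantize_output(q, 1.0 / 255.0, 0)
# Round-trip error is bounded by half the quantization step (scale / 2).
print(bool(np.abs(x - x_back).max() <= 0.5 / 255 + 1e-6))  # prints True
```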

Thanks!
Aman
