Issues converting YOLOv8 ONNX model to uint8 .nb for STM32MP25

20DeViL00
Associate II

 

Hi everyone,

I’m currently working on deploying a YOLOv8 model on the STM32MP25 using ST Edge AI. Here’s what I’ve done so far and where I’m stuck.

Training & Export:

from ultralytics import YOLO
model = YOLO("yolov8n.pt")  # nano variant

model.train(data=path, epochs=50, imgsz=256, project="Training", name="yolov8n_run", save=True)

# Export to ONNX
model.export(format="onnx", imgsz=256, opset=10, dynamic=False, simplify=True, half=False, int8=False)

ONNX model details:

  • Input: [1, 3, 256, 256], float32

  • Output: [1, 5, 1344], float32
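
For reference, the shapes above can be double-checked with a quick onnxruntime session (a minimal sketch, assuming onnxruntime is installed; the names in the comments are typical YOLOv8 export names, not verified):

import onnxruntime as ort

# Inspect the exported model's input/output names, shapes, and dtypes
sess = ort.InferenceSession("best.onnx")
inp = sess.get_inputs()[0]
out = sess.get_outputs()[0]
print(inp.name, inp.shape, inp.type)  # e.g. images [1, 3, 256, 256] tensor(float)
print(out.name, out.shape, out.type)  # e.g. output0 [1, 5, 1344] tensor(float)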

I successfully converted it to .nb using:

stedgeai generate --model best.onnx --target stm32mp25

This produces a float16 model (I tried to run it, but it did not work).

Problem:
STM32MP2 works with uint8 models, so I tried converting the input/output to uint8:

stedgeai generate --model best.onnx --target stm32mp25 --input-data-type uint8 --output-data-type uint8

This fails during compilation with:

Fatal model compilation error: 512 Error during NBG compilation, model is not supported

Observation:

  • The .nb file generates normally when no conversion options are passed.

  • Explicitly requesting uint8 fails.

  • The error suggests the ST Edge AI compiler doesn’t support direct uint8 conversion for this ONNX model.

  • I've also tried via the ST Edge AI Developer Cloud, but it did not work either.

My Question:
Has anyone successfully converted a YOLOv8 ONNX model to uint8 for STM32MP25? Are there recommended steps for preparing the model for uint8 quantization?

Any advice on preprocessing, exporting, or using ST tools to get a uint8-ready model would be highly appreciated.

Thanks!
Aman

4 REPLIES
Julian E.
ST Employee

Hello @20DeViL00,

 

You need to use an int8 model.

In your command:

model.export( format="onnx", imgsz=256, opset=10, dynamic=False, simplify=True, half=False, int8=True )

 

The AI MP2 team also recommends using TFLite if you can, as it seems to work better in most cases:

model.export( format="tflite", ... , int8=True )

 

Then you can do your generate.

 

The --input-data-type uint8 --output-data-type uint8 options only add conversion layers to match the types you want at the input and output. They do not change the model itself.

 

Have a good day,

Julian



 

Hi @Julian E.,

Thanks for the clarification regarding the int8 export. I tried following your suggestion. Here’s what I did:

model.train(data=path, epochs=100, imgsz=256, project="Training", name="yolov8n_run", save=True, int8=True)
model.export(format="onnx", imgsz=256, opset=10, dynamic=False, simplify=True, half=False, int8=True)

Then I ran the ST Edge AI generate command:

stedgeai generate --model best.onnx --target stm32mp25 --input-data-type uint8 --output-data-type uint8

But I’m hitting this error:

ST Edge AI Core v2.2.0-20266 2adc00962
make: *** [.../export_ovxlib/makefile.linux:53: vnn_pre_process.o] Error 1
E 17:34:11 Fatal model compilation error: 512
E 17:34:11 ('Fatal model compilation error: 512', 'nbg_compile')
Error during first compilation. Retrying with other settings...
make: *** [.../export_ovxlib/makefile.linux:53: vnn_pre_process.o] Error 1
E 17:34:13 Fatal model compilation error: 512
E 17:34:13 ('Fatal model compilation error: 512', 'nbg_compile')
E010(InvalidModelError): Error during NBG compilation, model is not supported

It seems the model isn’t compiling for STM32MP2. 

Thanks again for the guidance!

Aman Sharma

Hello @20DeViL00,

 

It seems that for ONNX export, the int8 argument does not work (https://docs.ultralytics.com/modes/export/#export-formats).

 

So you can try to use TFLite instead, or take your non-quantized model and quantize it manually before doing the generate. One possible manual route is sketched below.
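
As an illustration only (assumptions: onnxruntime is installed, the ONNX input is named "images", and calib_data/ holds preprocessed [1, 3, 256, 256] float32 tensors saved as .npy files), post-training static quantization could look like:

import glob
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static

class YoloCalibReader(CalibrationDataReader):
    def __init__(self, calib_dir, input_name="images"):
        # 'images' is the usual YOLOv8 ONNX input name; verify with Netron
        self.input_name = input_name
        self.files = iter(glob.glob(f"{calib_dir}/*.npy"))

    def get_next(self):
        path = next(self.files, None)
        if path is None:
            return None  # signals the end of the calibration data
        # each .npy holds one preprocessed [1, 3, 256, 256] float32 tensor
        return {self.input_name: np.load(path)}

quantize_static(
    "best.onnx",
    "best_int8.onnx",
    YoloCalibReader("calib_data"),
    activation_type=QuantType.QUInt8,
    weight_type=QuantType.QInt8,
)

The generate step would then run on best_int8.onnx instead of best.onnx.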

 

Have a good day,

Julian



Hi @Julian E. 


Sure, I will try to export the model with TFLite.

Thanks for the help

Aman Sharma