I encountered an issue while using the ST Edge AI Developer Cloud

yuhanglin114
Associate II

My development board is the STM32MP257F-DK, and I want to use ST Edge AI Developer Cloud to convert .onnx models to .nb models. However, in the "Optimize your model with STM32AI MPU Tool" step, the conversion fails with the error "Error while generating optimized file. Generation does not contain any output". I then tried the model provided by the ST Model Zoo, st_yoloxn_d033w_025-416_qdq_int8_ort_detection_COCO_2017_Person.onnx, and hit the same error during optimization. What is the reason for this? A few days ago, I was able to successfully convert ssd_mobilenetv1_pt_coco_300_qdq_int8_ort_detection_CCO_2

 

The error message is as follows:

23ee2f2d-c0ae-4f95-b47f-f203317b97a2.png

 

5 REPLIES
yuhanglin114
Associate II

Next, I tried the conversion of the following three models again:

  1.   xinet_a75_picasso_muse_160_nomp_neural_style_transfer_COCO_2017_80_classes.tflite
  2.   st_yoloxn_d033_w025_416_int8_object_detection_COCO_2017_Person.tflite
  3.   ssdlite_mobilenetv3small_pt_coco_300_qdq_int8_object_detection_COCO_2017_80_classes.onnx

and they can all be converted to NB format normally! (All of my models come from "Pick a model from ST Model Zoo" on the right side of the ST Edge AI Developer Cloud website. As the names indicate, they are already quantized to int8, so I skipped the quantization step and went straight to Optimize.)

However, the model st_yoloxn_d033w_025-416_qdq_int8_ort_detection_COCO_2017_Person.onnx keeps reporting errors and cannot be converted!

I also want to deploy YOLO11n on the MP257, but I ran into problems converting it to NB format.
I converted the PT file to ONNX with:

from ultralytics import YOLO

model = YOLO(r"F:\deeplearning\yolo11n.pt")
model.export(format='onnx', simplify=True, imgsz=640, opset=12)

Then, on ST Edge AI Developer Cloud, I quantized it to int8, but the tool still shows the input and output as 32-bit, and optimization also fails with errors...
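For what it's worth, with QDQ-format int8 quantization the graph inputs and outputs usually stay float32 (the QuantizeLinear/DequantizeLinear pairs sit inside the graph), so the tool showing 32-bit I/O does not by itself mean quantization failed. Below is a minimal sketch of my own (the helper `describe_io` is hypothetical, not an ST or ONNX API) that decodes the TensorProto `elem_type` codes the ONNX spec defines and that Netron displays per tensor:

```python
# ONNX TensorProto.DataType codes as defined in the ONNX specification;
# Netron shows these for every tensor in the graph.
ELEM_TYPE = {
    1: "FLOAT32", 2: "UINT8", 3: "INT8", 6: "INT32",
    7: "INT64", 10: "FLOAT16", 11: "DOUBLE",
}

def describe_io(elem_type_code):
    """Map an ONNX elem_type code to a readable dtype name (hypothetical helper)."""
    return ELEM_TYPE.get(elem_type_code, f"unknown({elem_type_code})")

# A QDQ-quantized model typically still declares FLOAT32 graph inputs/outputs;
# the int8 tensors live between the QuantizeLinear/DequantizeLinear nodes.
print(describe_io(1))  # FLOAT32
print(describe_io(3))  # INT8
```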

16f09408-edce-466a-8315-7d41e4f1ff86.png

Viewing yolo11n_PerChannel_quant_random1.onnx in Netron, I can see that it has been quantized:

12c9c5a5-0e88-40fe-ba35-61ae06764531.png

I have put the ONNX model and the quantized model of yolo11n in a 7z archive; you can download and view them if needed.

hamitiya
ST Employee

Hello @yuhanglin114 

The information at the top of ST Edge AI Developer Cloud comes from the original model. It is not refreshed when you switch from the original model to the quantized one. To update it, go back to "Home" and start again with the quantized model:

 

image.png

 

The model is then correctly shown as quantized.

However, as you mentioned, the quantized model is not able to run on MPUs. The source model runs correctly on the STM32MP257F-DK.

I have logged the errors and will keep you posted, since I am able to reproduce them.

Best regards,

Yanis


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
hamitiya
ST Employee

Hello,

To complete my previous message:

- I was able to generate the NBG file without quantization.

Without NBG: 2348 ms with ONNX Runtime 1.19.2.

With NBG: 1127 ms, running mainly on the GPU rather than the NPU.

 

With quantization, the main reason is likely that we are using a recent quantization version, which produces data types that are not yet supported.

 

The error generated by STM32 MPU tool is the following:

E [ops/vsi_nn_op_eltwise.c:op_check_add:466]Inputs/Outputs data type not support: ASYM UINT8, SYM INT8
E [vsi_nn_graph.c:setup_node:551]Check node[134] MULTIPLY fail
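Reading the log, the NPU's elementwise kernel is being handed one asymmetric-uint8 input and one symmetric-int8 input for a MULTIPLY node, a combination it does not implement. A small illustration of my own (not ST or VSI code) of the two schemes, both following the ONNX DequantizeLinear formula:

```python
def dequantize(q, scale, zero_point):
    """ONNX DequantizeLinear: r = scale * (q - zero_point)."""
    return scale * (q - zero_point)

# Asymmetric uint8: q in [0, 255], nonzero zero_point (e.g. 128).
r_asym = dequantize(200, 0.02, 128)   # 0.02 * (200 - 128) = 1.44

# Symmetric int8: q in [-128, 127], zero_point fixed at 0.
r_sym = dequantize(72, 0.02, 0)       # 0.02 * 72 = 1.44

# Same real value, different integer encodings: a kernel multiplying the
# raw integers needs scheme-specific handling, which this NPU elementwise
# op does not provide for the mixed case.
print(r_asym, r_sym)
```

A possible workaround (my assumption, untested here) would be to re-quantize with ONNX Runtime's `quantize_static` using `activation_type=QuantType.QInt8` so that both inputs of the MULTIPLY share the symmetric int8 scheme.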

 

Best regards,

Yanis



I didn't say that the source model can run on the MP2. My question is why the officially provided model cannot be converted to NB format properly.

May I ask what model you are using?