Input, output, and model type not correctly identified on the ST Edge AI Developer Cloud platform?

LiamChen
Associate II

After uploading my own model to the ST Edge AI Developer Cloud platform, I quantized and optimized it there. However, when I selected my model, after a few minutes of analysis the platform could not identify its input, output, and model type, so I could not proceed to the next step. Could you please let me know the reason and help me solve this problem? Thank you very much.

LiamChen_0-1745316041177.png

 

5 REPLIES
hamitiya
ST Employee

Hello @LiamChen 

ST Edge AI Developer Cloud performs an analysis of your model once you load it from the homepage.

If no information is shown once you have been redirected, it means that either the analysis took too long or your model is not compatible with ST Edge AI Core.

To get more information, you can select the STM32 MCU platform and go to the "Optimize" panel. There you will find the "default" run performed in the first step, along with any associated output.

 

Best regards,

Yanis


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

LiamChen
Associate II


I was able to perform quantization normally on the platform, and it returned the following results in the terminal.

Executing with: {'model': '/tmp/quantization-service/0f017da0-e295-40e4-a363-00c5a6c4bcec/lichi.onnx', 'data': None, 'disable_per_channel': False}
Preprocess the model to infer shapes of each tensor
Quantizing model...
lichi_QDQ_quant.onnx model has been created
None

2025-04-22 13:00:25.375923: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-04-22 13:00:25.403768: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
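
For reference, these steps appear to map to ONNX Runtime static quantization in QDQ format. Below is a minimal local sketch under that assumption (the file names are taken from the log; the random calibration reader is a stand-in, since the run was launched with 'data': None, and the YOLOv5-style input name "images" and shape 1x3x640x640 are assumptions):

import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantFormat, quantize_static
from onnxruntime.quantization.shape_inference import quant_pre_process

class RandomDataReader(CalibrationDataReader):
    # Stand-in calibration data: the platform run was passed 'data': None.
    # Input name and shape are assumed for a YOLOv5-style model.
    def __init__(self, num_samples=8):
        self._samples = iter(
            [{"images": np.random.rand(1, 3, 640, 640).astype(np.float32)}
             for _ in range(num_samples)]
        )

    def get_next(self):
        return next(self._samples, None)

# "Preprocess the model to infer shapes of each tensor"
quant_pre_process("lichi.onnx", "lichi_preprocessed.onnx")

# "Quantizing model..." -> produces the QDQ model named in the log
quantize_static(
    "lichi_preprocessed.onnx",
    "lichi_QDQ_quant.onnx",
    calibration_data_reader=RandomDataReader(),
    quant_format=QuantFormat.QDQ,
    per_channel=True,  # the log shows 'disable_per_channel': False
)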

It seems that the quantization succeeded, but the input, output, and model type are still not identified. Also, when viewing the graph in Netron, the input and output are not re-quantized. And when moving on to the optimization step, the freshly quantized model cannot be selected for optimization; the optimization eventually fails with the error below.
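
For what it's worth, with QDQ quantization the graph inputs and outputs normally remain float32, with QuantizeLinear/DequantizeLinear nodes inserted inside the graph, so Netron showing float input and output may be expected. Here is a minimal sketch to check locally what the platform should identify (the file name comes from the quantization log):

import onnx

# Print the graph inputs/outputs with their element types and shapes.
model = onnx.load("lichi_QDQ_quant.onnx")
for kind, tensors in (("input", model.graph.input), ("output", model.graph.output)):
    for t in tensors:
        tt = t.type.tensor_type
        dims = [d.dim_value or d.dim_param for d in tt.shape.dim]
        print(kind, t.name, onnx.TensorProto.DataType.Name(tt.elem_type), dims)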

LiamChen_1-1745327875075.png

Error while generating optimized file. Generation does not contain any output

 

hamitiya
ST Employee

Thanks for your update.

Is it possible to share your model? I will then investigate further.

 

Yanis


LiamChen
Associate II


We are very sorry, but we cannot share the model without authorization, for reasons of commercial confidentiality. What we can say is that our model is based on the YOLOv5 framework; I remember seeing some related application cases on STM32 MPU on Google. If you need any other details, I will do my best to provide them. Thank you for your reply.

hamitiya
ST Employee

No problem, it is fully understandable.

First, if you want to work on STM32 MCUs, I would recommend looking at the layers that are supported by ST Edge AI Core for ONNX: ONNX toolbox support
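
A quick way to check this is to list the distinct operator types your model uses and compare them against that support table (a minimal sketch; the file name is taken from the quantization log earlier in the thread):

import onnx

# Collect the distinct ONNX operator types used by the model.
model = onnx.load("lichi_QDQ_quant.onnx")
ops = sorted({node.op_type for node in model.graph.node})
print("\n".join(ops))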

If you want to work on an STM32 MPU instead, you can give it a try when benchmarking even if quantization didn't work; it will run through ONNX Runtime.
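
Before benchmarking, a quick sanity check is to confirm that the model loads and runs in plain ONNX Runtime on a PC (a minimal sketch; the file name and the YOLOv5-style 640x640 input are assumptions):

import numpy as np
import onnxruntime as ort

# Load the model with the CPU provider and run one dummy inference.
sess = ort.InferenceSession("lichi_QDQ_quant.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape, inp.type)

x = np.random.rand(1, 3, 640, 640).astype(np.float32)  # assumed 640x640 RGB input
for out, meta in zip(sess.run(None, {inp.name: x}), sess.get_outputs()):
    print("output:", meta.name, out.shape, out.dtype)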

 

If possible, don't hesitate to share what is visible when performing the optimization, especially the Terminal output:

 

hamitiya_0-1745330850415.png

 

 

Best regards,

Yanis

