
INTERNAL ERROR: Value of zero not correct

Associate II

Hello, everyone! I'm learning to deploy my object-detection model with STM32Cube.AI (cubeai). The model is trained in PyTorch, and since Cube.AI doesn't support .pth files, I converted it to ONNX format. Because the model was too large for my development board, I then performed a quantization step; since Cube.AI officially recommends quantizing in ONNX format, that's what I did. However, when I analyze the quantized model with Cube.AI, the following error occurs (the model before quantization can be analyzed normally). Can anyone help me figure out what's going on?



Associate II

By the way, the quantized ONNX model runs normally with onnxruntime.

ST Employee

STM32Cube.AI supports ONNX quantization in QDQ format, per channel, with INT8 weights and activations.

You can look in the embedded documentation, in the quantization chapter, for a sample script using ONNX.

That said, could you share the model with us so we can analyze what's going on?


Oops, I didn't see that the model was attached.

Aha, thank you very much for your reply! That is exactly the procedure I followed, from the quantization section of the embedded documentation. My model works fine with onnxruntime.InferenceSession. Attached is a test image.



I tried quantizing with the STM32Cube.AI Developer Cloud as well, and I ran into a similar error about an inconsistent bias shape.

I've created a bug report for the dev team.

So this error is a Cube.AI problem? Is there a solution, or will there be one in the near future? I need to get this demo working as soon as possible for a report.

It is likely a problem in CubeAI in the way we interpret the shape of the bias.


Do you have any alternative? Or what would you recommend I do to avoid this problem?

Hello @fauvarque.daniel, has this problem been solved? If not, what else would you recommend I try? Looking forward to your reply! Thanks.