Hi all,
I’m working with a simple test ONNX model (test_onnx_file_v1.onnx) that I quantized using the ST Edge AI Developer Cloud into a per-tensor quantized version (test_onnx_file_v1_PerTensor_quant_random_1.onnx).
However, when I attempt to analyze the quantized model using the following command:
stedgeai.exe analyze --target stm32n6 --name network -m C:/Users/.../Desktop/test_onnx_models/test_onnx_v1/test_onnx_file_v1_PerTensor_quant_random_1.onnx --classifier --st-neural-art n6-allmems-O3@C:/ST/STEdgeAI/2.0/scripts/N6_scripts/user_neuralart.json --workspace C:/Users/.../Desktop/test_onnx_models/test_onnx_v1/analyze/workspace --output C:/Users/.../Desktop/test_onnx_models/test_onnx_v1/analyze/network_output
I receive the following error:
conv node=Gemm_10_conv_16 Error kernel 0 dequantized value index=30 not an int8 = 138 tval= scale=0.00352175138 assertion "0" failed: file "/c/local/jenkins_cloud/.../scale_offset_conv.cc", line 484 Internal compiler error (signo=6), please report it
The tool version is: ST Edge AI Core v2.0.0-20049
From the message, it seems that when the compiler re-quantizes a dequantized weight of the Gemm_10_conv_16 node, the result (138) falls outside the int8 range [-128, 127], which triggers the failed assertion in scale_offset_conv.cc.
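As a sanity check on my own side, I tried to mimic what I assume the compiler's range check does (this is just my guess at the logic, not the actual implementation): for symmetric per-tensor int8 quantization, round(w / scale) for every stored weight w should land in [-128, 127], and the reported value 138 would fail that test.

```python
# Minimal sketch (assumption): mimic an int8 range check on quantized weights.
# For symmetric per-tensor int8 quantization, each weight w should satisfy
# round(w / scale) in [-128, 127]; the error log reports 138, which does not.
INT8_MIN, INT8_MAX = -128, 127

def check_int8(weights, scale):
    """Return (index, quantized value) pairs that fall outside the int8 range."""
    bad = []
    for i, w in enumerate(weights):
        q = round(w / scale)
        if not INT8_MIN <= q <= INT8_MAX:
            bad.append((i, q))
    return bad

scale = 0.00352175138                   # scale value from the error message
weights = [0.1 * scale, 138 * scale]    # second entry mirrors the reported 138
print(check_int8(weights, scale))       # -> [(1, 138)]
```

If a check like this flags values in the quantized .onnx file, the problem would be in the quantization output itself rather than in the analyze step.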
Any insight or suggestions would be greatly appreciated. Let me know if I can provide more logs or files; I've attached the .onnx file.