2026-02-09 4:59 AM
Hi all,
I’m trying to compile an ONNX model that has float32 input and output. When I compile the model using ST Edge AI, the generated output ONNX shows both input and output as INT8 instead of float32. I wanted to understand:
- Is this behavior because the model is being compiled specifically as an ONNX model?
- Or is there a specific reason ST Edge AI is forcing INT8 input/output during compilation (e.g., quantization settings, optimization constraints, or tool limitations)?

Any clarification on how ST Edge AI handles mixed-precision models and I/O types would be really helpful.
Thanks in advance!
2026-02-09 8:48 AM
Hi @Afreen,
You can select the input and output data types with the --input-data-type and --output-data-type options.
You can find the details here:
You will see that it states that for quantized QDQ ONNX models, the default I/O type is changed to INT8.
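As a sketch, forcing float32 I/O on a quantized model might look like the following. The executable name `stedgeai`, the subcommand, and the model/target names are assumptions for illustration; only the `--input-data-type` and `--output-data-type` options come from the documentation above.

```shell
# Hypothetical invocation: generate code from a quantized QDQ ONNX model
# while keeping float32 at the model boundary. The tool then inserts the
# quantize/dequantize steps at the I/O interfaces for you.
stedgeai generate \
  --model my_quantized_model.onnx \
  --target stm32 \
  --input-data-type float32 \
  --output-data-type float32
```

Without these options, the default for a QDQ model is INT8 I/O, which is why your generated model shows INT8 input and output.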
Have a good day,
Julian