Does Cube.AI support quantized ONNX models?
2024-02-01 4:26 AM
I used a quantization tool called PPQ to quantize my ONNX model from float32 to int8, because the activation values in the model's middle layers were too large. The model itself was defined and trained by me in PyTorch. When I try to analyze the quantized model with STM32CubeMX and Cube.AI version 8.1.0, the progress bar gets stuck: it loads from 0% to 100%, then drops back to 0%, and if I don't interrupt it, it keeps cycling indefinitely. Can anyone help me?
Labels:
- STM32 ML & AI
- STM32CubeAI
- STM32CubeMX
0 Replies
