2023-10-12 10:15 AM
The STM32CubeMX AI tool has an option to compress a model. Is there any way to get compression to work on a quantized model? I'm guessing the answer is no, because when I try with a .tflite or .onnx model it complains that only float or double is supported. If there is any way to get model compression to work with an integer model, I would appreciate some tips on how that might be done.
2023-10-12 03:05 PM
I found my answer in another post. Apparently compression is only applied to floating-point dense (fully connected) layers, so it has no effect on an already-quantized model.
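Worth noting: full-integer quantization already shrinks weight storage roughly 4x versus float32, which is in the same ballpark as what X-CUBE-AI's compression option targets for float dense layers. If the goal is simply a smaller model, standard TensorFlow Lite post-training quantization covers it. Here's a minimal sketch, assuming a trained Keras model and a placeholder calibration generator (representative_data and the toy model below are illustrative, not from the original thread):

```python
import numpy as np
import tensorflow as tf

# Placeholder model; substitute your own trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Representative samples calibrate the activation ranges during quantization.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer kernels so the exported .tflite is an int8 model.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting int8 .tflite can then be imported into X-CUBE-AI directly, just without the compression option applied on top.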