Model compression with a quantized model

marchold
Associate II

The STM32CubeMX AI tool has an option to compress a model. Is there any way to get compression to work on a quantized model? I'm guessing the answer is no, because when I try with a .tflite or .onnx model it complains that only float or double is supported. If there is any way to get model compression to work with an integer model, I would appreciate some tips on how that might be done.

ACCEPTED SOLUTION
marchold
Associate II

I found my answer in another post. Apparently compression is only available on floating-point dense layers.
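For anyone else who lands here: given that constraint, the practical workaround is to let the tool do the compression on the float model rather than quantizing first. Below is a minimal sketch, assuming TensorFlow 2.x; the toy model architecture and the file names (model_float.h5, model_int8.tflite) are just placeholders, not anything prescribed by ST. It contrasts the two export paths: a float Keras .h5 with dense layers, which the compression option can act on, versus a post-training int8 .tflite, which it cannot.

```python
import numpy as np
import tensorflow as tf

# Toy model built from Dense layers -- per the answer above, the only
# layer type whose float weights the compression option applies to.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Path 1: keep the weights in float32 and save as .h5. Import this file
# into the STM32CubeMX AI tool and enable compression there.
model.save("model_float.h5")

# Path 2: post-training int8 quantization. The resulting .tflite still
# imports into the tool, but the compression option no longer applies.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_int8 = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_int8)
```

In other words, you pick one size reduction or the other per layer: tool-side weight compression on the float dense layers, or int8 quantization of the whole model up front, but not compression on top of an already-quantized model.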

