2019-07-24 08:01 AM
Hello,
Is there a way to give X-CUBE-AI a model with quantized weights? Right now the tool can perform a custom post-training quantization on a given Keras model. However, what if we already have a model whose weights are quantized (which is the case when we do quantization-aware training with TensorFlow Lite, for example)? Is there any solution or workaround to make use of such quantization-aware weights?
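For context, the kind of quantized model I mean is the .tflite file produced by the TFLiteConverter. The sketch below is only an illustration, assuming a TensorFlow 2.x Keras model; the file names and the use of default dynamic-range weight quantization are placeholders, not part of my actual setup.

```python
import tensorflow as tf

# Hypothetical Keras model, e.g. one fine-tuned with quantization-aware training.
model = tf.keras.models.load_model("my_model.h5")

# Convert to TensorFlow Lite, requesting weight quantization during conversion.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting .tflite file carries quantized weights.
with open("my_model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```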
Thank you and kind regards
2019-07-24 08:17 AM
Not in the X-Cube-AI 4.0.0 release; we only support TFLite models in floating point.
The next release, in mid-Q4, will add support for quantized TFLite networks.
Regards
Daniel