2023-10-31 07:36 AM
I have an LSTM model which was quantized using TensorFlow Lite dynamic-range, full-integer, and float16 quantization. I then wanted to benchmark these three quantized models on the STM32AI model zoo, but after importing them into the web UI and pressing "Start", I get the following error message:
>>> stm32ai analyze --model dummy_model_0_stateful_int.tflite --allocate-inputs --allocate-outputs --compression none --optimization balanced --series stm32f4 --target stm32f4 --name network --workspace workspace --output output
Neural Network Tools for STM32 family v1.7.0 (stm.ai v8.1.0-19520)
NOT IMPLEMENTED: Unsupported layer types: READ_VARIABLE, VAR_HANDLE, ASSIGN_VARIABLE, CALL_ONCE, WHILE, stopping.
I would be really grateful for ideas or insights on the problem :)
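For reference, the three quantization modes mentioned above can be produced with the TFLiteConverter roughly as follows. This is a minimal sketch using a tiny matmul graph as a stand-in for the LSTM (the shapes, the random representative dataset, and the mode names are made up for illustration; the converter flags are the standard ones):

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Tiny matmul graph as a stand-in for the real model; the converter
# settings below are the same ones you would apply to an LSTM.
W = tf.constant(rng.standard_normal((8, 4)).astype(np.float32))

@tf.function(input_signature=[tf.TensorSpec([1, 8], tf.float32)])
def dense(x):
    return tf.matmul(x, W)

def convert(mode):
    conv = tf.lite.TFLiteConverter.from_concrete_functions(
        [dense.get_concrete_function()])
    # Optimize.DEFAULT on its own gives dynamic-range quantization.
    conv.optimizations = [tf.lite.Optimize.DEFAULT]
    if mode == "float16":
        conv.target_spec.supported_types = [tf.float16]
    elif mode == "full_int8":
        # A representative dataset is what turns DEFAULT into
        # full-integer quantization; 16 random samples, for illustration.
        def rep_data():
            for _ in range(16):
                yield [rng.standard_normal((1, 8)).astype(np.float32)]
        conv.representative_dataset = rep_data
        conv.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        conv.inference_input_type = tf.int8
        conv.inference_output_type = tf.int8
    return conv.convert()  # returns the .tflite flatbuffer as bytes

models = {m: convert(m) for m in ("dynamic", "float16", "full_int8")}
```

Also note that a *stateful* LSTM (the filename above says "stateful") may be what introduces the READ_VARIABLE/VAR_HANDLE/ASSIGN_VARIABLE/CALL_ONCE ops listed in the error; exporting the model as stateless may avoid those ops independently of the quantization question.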
2023-10-31 07:54 AM
STM32Cube.AI doesn't support float16. You can benchmark your LSTM model in float32.
Best Regards
2023-10-31 08:05 AM
Thank you for the fast response. That makes sense, but shouldn't it support uint8? At least the full-integer-quantized model should work, right? After all, the UI also offers post-training quantization and benchmarking.
Thanks in advance
2023-10-31 08:37 AM
We support full int8 quantization (signed symmetric weights and signed asymmetric activations, per-channel scheme).
For LSTM, at this point we have a float32 implementation, but we are analyzing the need for int8 support.
At this point we have not proved that int8 LSTM quantization will maintain accuracy.
Regards
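As an aside, the scheme described in the accepted answer (symmetric int8 weights with a fixed zero point of 0, asymmetric int8 activations with a movable zero point) can be sketched in plain NumPy. The weight matrix and activation range below are made-up toy values, and real TFLite kernels differ in details (e.g. which axis is "per channel"):

```python
import numpy as np

def quantize_weights_per_channel(w):
    # Signed symmetric: zero point fixed at 0, one scale per output channel.
    scale = np.abs(w).max(axis=0) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def quantize_activations(x, x_min, x_max):
    # Signed asymmetric: per-tensor scale plus a (possibly non-zero) zero point.
    scale = (x_max - x_min) / 255.0
    zero_point = int(np.round(-128.0 - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

w = np.array([[0.5, -1.0], [0.25, 2.0]], dtype=np.float32)
qw, w_scale = quantize_weights_per_channel(w)
# Dequantizing recovers the weights to within one quantization step.
assert np.allclose(qw * w_scale, w, atol=w_scale.max())

x = np.array([-1.0, 0.0, 3.0], dtype=np.float32)
qx, x_scale, zp = quantize_activations(x, x_min=-1.0, x_max=3.0)
```

The asymmetric zero point lets the int8 range cover an activation range that is not centered on zero (here [-1, 3]), while the symmetric weight scheme keeps the per-channel multiply simple because its zero point is always 0.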
2023-11-02 05:40 AM
Thank you so much for the answer, that helps. One follow-up question that came up: is this limitation specific to the Developer Cloud, or does it also apply to STM32CubeMX or X-CUBE-AI? If I were to run a quantized LSTM model locally on an eval board, would that work?
2023-11-06 12:16 AM
Hello @JanK
this limitation is not specific to STM32Cube.AI Developer Cloud; it comes from the tool itself. STM32Cube.AI Developer Cloud uses the same tool as the one provided in X-CUBE-AI for STM32CubeMX.
Best regards,
Yanis
2024-03-04 08:59 AM
I am trying to analyze my model (a 1D CNN) using the Developer Cloud and I am facing this error:
>>> stm32ai analyze --model CNN_fc_1_quant_int8_int8_random_1.tflite --compression none --optimization balanced --series stm32f4 --target stm32f4 --name network --workspace workspace --output output
Neural Network Tools for STM32 family v1.7.0 (stm.ai v8.1.0-19520)
TOOL ERROR: operands could not be broadcast together with shapes (4,) (4,1,3,9)
I am not able to understand why; I kindly request your assistance.
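For what it's worth, that message is NumPy's broadcasting error surfacing from inside the tool. A plausible reading (my interpretation, not confirmed by the tool's output) is that a length-4 per-channel parameter vector is being combined with a (4, 1, 3, 9) weight tensor along the wrong axis, e.g. because of a channel-order or layout mismatch in the quantized model. A minimal reproduction of the shapes from the error:

```python
import numpy as np

vec = np.ones((4,))            # e.g. a per-channel parameter vector
w = np.ones((4, 1, 3, 9))      # e.g. a conv weight tensor

# NumPy aligns shapes from the right, so (4,) is matched against the
# last axis (length 9); 4 != 9 and neither is 1, hence the error.
try:
    vec * w
    broadcast_ok = True
except ValueError:
    broadcast_ok = False

# Lining the length-4 axis up with the first axis of w broadcasts fine:
good = vec.reshape(4, 1, 1, 1) * w
```

So the shapes in the message are mutually incompatible as-is; checking how the model was quantized (per-channel axis, weight layout) would be my first step.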