STM32AI Model Zoo won't let me analyze quantized model - NOT IMPLEMENTED: Unsupported layer types:

JanK
Associate II

I have an LSTM model that was quantized using TensorFlow's dynamic-range, full-integer, and float16 post-training quantization. I then wanted to benchmark these three quantized models on the STM32AI Model Zoo. But after importing the models into the web UI and pressing "Start", I get the following error message:

 

>>> stm32ai analyze --model dummy_model_0_stateful_int.tflite --allocate-inputs --allocate-outputs --compression none --optimization balanced --series stm32f4 --target stm32f4 --name network --workspace workspace --output output
Neural Network Tools for STM32 family v1.7.0 (stm.ai v8.1.0-19520)
NOT IMPLEMENTED: Unsupported layer types: READ_VARIABLE, VAR_HANDLE, ASSIGN_VARIABLE, CALL_ONCE, WHILE, stopping.

 

I would be really grateful for any ideas or insights on the problem 🙂

 


fauvarque.daniel
ST Employee

STM32Cube.AI doesn't support float16. You can benchmark your LSTM model in float32.

Best Regards

Thank you for the fast response. That makes sense, but shouldn't it support uint8? At least the fully integer-quantized model should work, right? After all, it is also possible to do post-training quantization and benchmarking in the UI.

Thanks in advance

fauvarque.daniel
ST Employee

We support full int8 quantization (signed symmetric weights and signed asymmetric activations, per-channel scheme).

For LSTM, at this point we have a float32 implementation, but we are analyzing the need for int8 support.

At this point we have not proved that int8 LSTM quantization will maintain accuracy.
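In other words (a small illustrative sketch of that int8 scheme, following the TFLite quantization convention; the function names are illustrative, not ST's code):

```python
def quantize_weights_per_channel(channels):
    """Signed symmetric, per channel: the zero point is fixed at 0 and
    each channel gets its own scale, mapping its largest |w| to 127."""
    out = []
    for w in channels:
        scale = max(abs(v) for v in w) / 127.0 or 1.0  # avoid 0 for all-zero channels
        q = [max(-127, min(127, round(v / scale))) for v in w]
        out.append((q, scale))
    return out

def quantize_activation(x, lo, hi):
    """Signed asymmetric, per tensor: the observed range [lo, hi]
    (with lo < hi) is mapped onto [-128, 127] via a non-zero zero point."""
    scale = (hi - lo) / 255.0
    zero_point = round(-128 - lo / scale)  # so that lo maps to -128
    q = max(-128, min(127, round(x / scale) + zero_point))
    return q, scale, zero_point
```

For example, a weight channel `[0.5, -1.0]` gets scale `1/127` with zero point 0, while an activation range `[0.0, 6.0]` (e.g. ReLU6 output) maps 0.0 to -128 and 6.0 to 127.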

Regards

Thank you so much for the answer, that helps me. One follow-up question that came up: is this limitation specific to the Developer Cloud, or does it also apply to STM32CubeMX and X-CUBE-AI? If I were to run a quantized LSTM model locally on an eval board, would that work?

 

hamitiya
ST Employee

Hello @JanK 

This limitation is not specific to STM32Cube.AI Developer Cloud but comes from the tool itself. STM32Cube.AI Developer Cloud uses the same tool as the one provided in X-CUBE-AI for STM32CubeMX.

Best regards,

Yanis

Hello @hamitiya,

thank you so much for the response. This answers my question.

Best regards,

Jan

I am trying to analyze my model (a 1D-CNN) using the Developer Cloud and I am facing this error:

>>> stm32ai analyze --model CNN_fc_1_quant_int8_int8_random_1.tflite --compression none --optimization balanced --series stm32f4 --target stm32f4 --name network --workspace workspace --output output
Neural Network Tools for STM32 family v1.7.0 (stm.ai v8.1.0-19520)
TOOL ERROR: operands could not be broadcast together with shapes (4,) (4,1,3,9)

I am not able to understand why; I kindly request your assistance.
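For what it's worth, that message is a NumPy broadcasting error inside the tool: a length-4 vector cannot be broadcast against a (4, 1, 3, 9) array because NumPy aligns shapes from the trailing axis. A minimal reproduction (the shapes are taken from the error message; what they represent inside the tool, e.g. a per-axis scale vector versus a weight tensor, is an assumption):

```python
import numpy as np

a = np.zeros(4)             # e.g. a per-axis parameter vector
b = np.zeros((4, 1, 3, 9))  # e.g. a 4-D weight tensor

# Broadcasting aligns shapes from the RIGHT, so (4,) is matched against
# the trailing 9 of (4, 1, 3, 9) -- sizes 4 and 9 clash:
try:
    a + b
    msg = ""
except ValueError as e:
    msg = str(e)  # "operands could not be broadcast together ..."

# Giving the vector explicit trailing axes aligns it with the first axis:
c = a.reshape(4, 1, 1, 1) + b
print(c.shape)  # (4, 1, 3, 9)
```

So the tool is likely applying a per-axis quantization parameter along the wrong axis of that layer's weights, which would be a question for the X-CUBE-AI team rather than something fixable in the model itself.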