
Error in post-training quantization with ST Edge AI Developer Cloud

silviag
Associate II

Hello everyone,
I am learning how to use the ST Edge AI Developer Cloud tool. After uploading the model I developed with TensorFlow and selecting the STM32 MCUs platform, I tried the post-training quantization, but I encountered the following error:

Executing with: {'model': '/tmp/quantization-service/970d7e77-a0e3-4d23-a8ec-5e7e52d9a260/1dcnnrnn_01.h5', 'data': '/tmp/quantization-service/970d7e77-a0e3-4d23-a8ec-5e7e52d9a260/dataset4quantization.npz', 'input_type': tf.float32, 'output_type': tf.float32, 'optimization': <Optimize.DEFAULT: 'DEFAULT'>, 'output': '/tmp/quantization-service/970d7e77-a0e3-4d23-a8ec-5e7e52d9a260', 'disable_per_channel': False}
Error when deserializing class 'InputLayer' using config={'batch_shape': [None, 9999, 1], 'dtype': 'float32', 'sparse': False, 'name': 'input_layer_4'}.
Exception encountered: Unrecognized keyword arguments: ['batch_shape']

The error occurs regardless of whether I provide a dataset for quantization or not.

Can anyone help me understand what I need to change? Thank you in advance.


hamitiya
ST Employee

Hello,

Could you please tell me which version of TensorFlow you used to export your model?

I would first suggest trying version 2.15, since it is the one used for quantization.

 

Best regards,

Yanis


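For context, TensorFlow 2.16 and later bundle Keras 3, whose InputLayer serializes a 'batch_shape' key, while the Keras 2 shipped with TF 2.15 only understands 'batch_input_shape'; that mismatch is what the deserialization error reports. A minimal check of the versions in play (a sketch, nothing ST-specific):

import tensorflow as tf

print(tf.__version__)        # the quantization service runs 2.15
print(tf.keras.__version__)  # a 3.x value here explains the 'batch_shape' key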

Hello Yanis,


I used version 2.17 to train and export the model. I downgraded to version 2.15 to export the model again and load it into ST Edge AI Developer Cloud, but while loading the model in Python I got the same error I was encountering in ST Edge AI Developer Cloud:

Error when deserializing class 'InputLayer' using config={'batch_shape': [None, 9999, 1], 'dtype': 'float32', 'sparse': False, 'name': 'input_layer_4'}.
Exception encountered: Unrecognized keyword arguments: ['batch_shape']

Do you have any suggestions? Should I retrain the network using TensorFlow 2.15?

 

Best regards,
Silvia
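As an aside, the load failure under TF 2.15 is expected: the H5 file was written by Keras 3 (TF 2.17), and Keras 2 does not recognize its 'batch_shape' key. If retraining turns out to be undesirable, one workaround sometimes used in the community is to remap the key at load time. The sketch below assumes only InputLayer's config changed between Keras versions; it is not an official ST recommendation.

import tensorflow as tf  # assumes tensorflow==2.15 (Keras 2)

class PatchedInputLayer(tf.keras.layers.InputLayer):
    # Remap the Keras 3 'batch_shape' config key onto the Keras 2
    # 'batch_input_shape' argument so the H5 file can be deserialized.
    def __init__(self, batch_shape=None, **kwargs):
        if batch_shape is not None:
            kwargs["batch_input_shape"] = batch_shape
        super().__init__(**kwargs)

model = tf.keras.models.load_model(
    "1dcnnrnn_01.h5",  # file name taken from the log above
    custom_objects={"InputLayer": PatchedInputLayer},
)
model.save("1dcnnrnn_01_tf215.h5")  # re-save in the Keras 2 H5 format

If other layers' configs also changed between Keras versions, retraining under 2.15, as suggested in the next reply, remains the safer route.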

Hello Silvia,

Yes, I would first suggest retraining the network with TF 2.15 and seeing if the same issue occurs.

 

Best regards,

Yanis


silviag
Associate II

Thanks @hamitiya and @Destini
I retrained the model using version 2.15 and, when I launched the quantization, that error didn't occur again.
But now I am encountering the following error, linked to trouble the converter has with some operations (TensorListReserve and StatefulPartitionedCall). How can I solve this issue? Is it possible to select TF ops and disable the `_experimental_lower_tensor_list_ops` flag in the TFLite converter object, as suggested in the error message, directly in ST Edge AI Developer Cloud, or do I have to implement the conversion to TFLite myself in my Python script?


Executing with: {'model': '/tmp/quantization-service/603561c5-0a56-4eee-b3dc-a1ee6ad23869/1dcnnrnn_00.h5', 'data': None, 'input_type': tf.float32, 'output_type': tf.float32, 'optimization': <Optimize.DEFAULT: 'DEFAULT'>, 'output': '/tmp/quantization-service/603561c5-0a56-4eee-b3dc-a1ee6ad23869', 'disable_per_channel': False}
No data specified, enabling fake quantization
Converting original model to TFLite...
/app/quantizer/cli.py:53:1: error: 'tf.TensorListReserve' op requires element_shape to be static during TF Lite transformation pass
    res = quantize_from_local_file(
    ^
<unknown>:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
/app/quantizer/cli.py:53:1: error: failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
    res = quantize_from_local_file(
    ^
<unknown>:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
<unknown>:0: error: Lowering tensor list ops is failed. Please consider using Select TF ops and disabling `_experimental_lower_tensor_list_ops` flag in the TFLite converter object. For example,
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter._experimental_lower_tensor_list_ops = False

Thanks for the help!

Best,
Silvia
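For reference, the change the converter error proposes looks like this when the conversion is done locally in a Python script (a sketch only; as the reply below explains, a model converted with SELECT_TF_OPS is not supported by ST Edge AI Core, so this does not solve the deployment problem):

import tensorflow as tf  # assumes tensorflow==2.15

model = tf.keras.models.load_model("1dcnnrnn_00.h5")  # file name from the log
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# The two settings quoted in the error message: keep the tensor-list ops
# as TensorFlow ops instead of lowering them to TFLite builtins.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
converter._experimental_lower_tensor_list_ops = False
tflite_model = converter.convert()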

Hello,

It means you are using operators that are not supported by the quantizer. Currently we only support

tf.lite.OpsSet.TFLITE_BUILTINS_INT8.

If you quantize locally and add tf.lite.OpsSet.SELECT_TF_OPS, the resulting model will not be supported by ST Edge AI Core.
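For comparison, a local full-integer conversion that stays within the supported op set would look roughly like the sketch below. It assumes the .npz calibration array is stored under the key 'x' (a hypothetical name), and it will only succeed once the model itself no longer produces tensor-list ops, for example after the recurrent layers are reworked to use static shapes:

import numpy as np
import tensorflow as tf  # assumes tensorflow==2.15

model = tf.keras.models.load_model("1dcnnrnn_00.h5")

def representative_dataset():
    # Yields calibration samples shaped like the model input, (1, 9999, 1).
    data = np.load("dataset4quantization.npz")["x"]  # 'x' is an assumed key
    for sample in data[:100]:
        yield [sample[np.newaxis, ...].astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# The only op set the quantizer supports, per the reply above:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.float32   # matches 'input_type' in the logs
converter.inference_output_type = tf.float32  # matches 'output_type' in the logs
tflite_model = converter.convert()
with open("1dcnnrnn_00_int8.tflite", "wb") as f:
    f.write(tflite_model)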
 
Best regards,
Yanis
