TOOL ERROR: operands could not be broadcast together with shapes (32,7200) (32,)

rickforescue
Associate III

[Image: 0693W00000Kcsu9QAB.png, network layers and shapes]

I have a convolutional model with convolutional layers, batch normalization layers, and dense layers at the end. The model is converted to a TFLite model. Inference works perfectly on a computer using TFLite, but when I try to deploy it on the NUCLEO-H743ZI2 I get this error.

The network layers and their shapes are shown in the attached picture. Has anyone come across this problem?

As far as my understanding goes, I did not make any mistake in creating the model; it looks like some bad interpretation by the STM32Cube.AI library.
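For context, this is roughly how the model is built and converted (a minimal sketch; the layer sizes and input shape below are placeholders, not my exact network):

import tensorflow as tf

# Sketch of the architecture: convolution + batch normalization blocks,
# with dense layers at the end (sizes here are illustrative only)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", input_shape=(96, 96, 1)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Conv2D(32, 3, padding="same"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Plain float TFLite conversion -- this variant runs fine on the computer
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)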

Additional info: I am using STM32Cube.AI version 7.1.0.

Thanks in advance

Rick

14 REPLIES
fauvarque.daniel
ST Employee

During code generation there is an optimization phase that can merge some layers. A typical case is a Conv2D followed by a BatchNormalization followed by a ReLU: the optimized graph will just have a Conv2D.

For example, for this part of the model:

[Image: 0693W00000KdCYTQA3.jpg, model graph before optimization]

After the optimizer runs, the model will look like:

[Image: 0693W00000KdCZaQAN.jpg, model graph after optimization]
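As an illustration, this is the kind of Keras block (hypothetical sizes, not taken from the posted model) that the optimizer collapses into a single Conv2D:

import tensorflow as tf

# Conv2D + BatchNormalization + ReLU: after the optimization pass only a
# single Conv2D node remains in the generated network
block = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", use_bias=False,
                           input_shape=(32, 32, 3)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.ReLU(),
])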

Regards

Daniel


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
rickforescue
Associate III

Thanks, I see. It's an interesting insight.

Is this still the correct syntax? I am using Cube AI 8.1.0, and when I add "--optimize.fold_batchnorm False" I get an unrecognized argument error.

DanF
Associate II

A little more information: the error itself only occurs when I attempt to quantize the model to 8-bit data types by adding these lines when creating the TFLite model:

 

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8 # or tf.uint8
converter.inference_output_type = tf.int8 # or tf.uint8
 
Without those lines there is no TOOL ERROR at all.
 
Update: It's only this line causing the problem:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
I suspect that the ST code generation expects some other ops, and that is why it fails.
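For completeness, here is roughly the full int8 post-training quantization flow I am describing (a minimal sketch; the tiny model and the representative_data_gen below are placeholders for my actual setup):

import numpy as np
import tensorflow as tf

# Placeholder model standing in for the real network
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(96, 96, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Representative dataset the converter uses to calibrate the int8 ranges
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# The line below is the one that appears to trigger the TOOL ERROR
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8

tflite_quant_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_quant_model)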

Hello DanF,

I have a similar problem; did you manage to solve it?

Thank you in advance, 
Ioan