
TOOL ERROR: operands could not be broadcast together with shapes (32,7200) (32,)

rickforescue
Associate III

[Attached image: 0693W00000Kcsu9QAB.png]

I have a convolutional model with convolutional layers, batch normalization layers, and dense layers at the end. The model is converted to a TFLite model. Inference works perfectly on a computer using TFLite, but when I try to deploy it on the Nucleo H743ZI2 I get this error.

The network layers and their shapes are shown in the attached picture. Has anyone come across this problem?

As far as my understanding goes, I did not make a mistake in the model creation; it looks like a misinterpretation by the STM32Cube.AI library.

Additional info: I am using X-CUBE-AI version 7.1.0.
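For what it's worth, the error text follows NumPy-style broadcasting rules, which require the trailing dimensions of the two operands to match. A minimal reproduction using the exact shapes from the error (interpreting the (32,) array as a per-channel parameter vector is my assumption, not something the tool reports):

```python
import numpy as np

acts = np.zeros((32, 7200))   # e.g. 32 channels of flattened activations
scale = np.ones(32)           # per-channel parameter, shape (32,)

try:
    acts * scale              # trailing dims 7200 vs 32 do not match
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes (32,7200) (32,)

ok = acts * scale[:, None]    # reshaping to (32, 1) makes the broadcast valid
print(ok.shape)               # (32, 7200)
```

This suggests the code generator is applying a per-channel vector against a tensor whose channel axis it has misplaced, which is consistent with the batch-normalization folding issue identified below in the thread.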

Thanks in advance

Rick


14 REPLIES
fauvarque.daniel
ST Employee

Can you share the model so I can reproduce the problem and have the development team fix it?

Thanks in advance

Regards

Daniel


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
rickforescue
Associate III

Hello @fauvarque.daniel​,

Thanks for the quick reply. Should I share the Keras .h5 file with you?

Yes, please.


rickforescue
Associate III

Here is the .h5 Keras file in zipped format. The model is rather big to fit in internal flash, so I quantized it using TFLite. Let me know if you have any other questions.

Thanks

Rick

fauvarque.daniel
ST Employee

If I may ask, could you also provide the quantized TFLite file, so that I have exactly the file you are using?

Daniel


fauvarque.daniel
ST Employee

I've reproduced the problem with the .h5 file; I'll let you know if there is a workaround.



OK, thank you @fauvarque.daniel​

fauvarque.daniel
ST Employee

The problem comes from the optimization that folds the batch normalization layers.

With the undocumented option "--optimize.fold_batchnorm False" the model is analyzed correctly.

You can pass the option directly on the stm32ai command line, or, if you are using X-Cube-AI inside STM32CubeMX, you can add it in the first screen of the advanced parameters window.
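For reference, a command-line invocation might look like the sketch below. The subcommand and model filename are placeholders of my own; only the `--optimize.fold_batchnorm False` option itself is quoted from this reply.

```shell
# Hypothetical invocation; adjust the subcommand and paths to your project.
stm32ai analyze -m model_quant.tflite --optimize.fold_batchnorm False
```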

Regards

Daniel


rickforescue
Associate III

Thanks a lot @fauvarque.daniel​. The solution works. 🙂

Well, I am curious. Can you give a bit more insight into it? What do you mean by folding the batch norm?
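For context (this is the standard technique, not a description of X-CUBE-AI internals): batch-norm folding merges a BatchNormalization layer's scale and shift into the preceding convolution's weights and bias, so the fused layer computes the same result at inference time without a separate BN op. A minimal NumPy sketch of the algebra, with illustrative shapes (a 1x1 conv as a plain matrix multiply):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))        # conv weights: one row per output channel
b = rng.normal(size=4)             # conv bias
gamma = rng.normal(size=4)         # BN scale, per channel
beta = rng.normal(size=4)          # BN shift, per channel
mean = rng.normal(size=4)          # BN running mean
var = rng.uniform(0.5, 2.0, size=4)  # BN running variance
eps = 1e-5

# Folding: w' = w * gamma / sqrt(var + eps)
#          b' = (b - mean) * gamma / sqrt(var + eps) + beta
s = gamma / np.sqrt(var + eps)
w_folded = w * s[:, None]
b_folded = (b - mean) * s + beta

x = rng.normal(size=3)             # one input "pixel"
conv_then_bn = ((w @ x + b) - mean) * s + beta   # conv followed by BN
folded = w_folded @ x + b_folded                 # single fused conv
assert np.allclose(conv_then_bn, folded)
```

The fused weights give the same output as conv followed by BN, which is why compilers like to apply this optimization; the bug reported in this thread appears to be in how that folding pass handled this particular model, which is why disabling it works around the error.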

Thanks

Rick