TOOL ERROR: operands could not be broadcast together with shapes (32,7200) (32,)
2022-03-16 8:50 AM
I have a convolutional model with convolutional layers, batch normalization layers, and dense layers at the end. The model is converted to a TFLite model. Inference works perfectly on a computer using TFLite, but when I try to deploy it on the Nucleo-H743ZI2 I get this error.
The network layers and their shapes are shown in the picture. Has anyone come across this problem?
As far as my understanding goes, I did not make a mistake in the model creation; it looks like a misinterpretation by the STM32Cube.AI library.
Additional info: I am using STM32Cube.AI version 7.1.0.
Thanks in advance
Rick
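[For readers hitting the same message: the error text follows NumPy-style broadcasting rules, where shapes are aligned starting from the trailing dimension. A minimal sketch that reproduces the mismatch in plain NumPy (illustrative only, not the X-Cube-AI internals):]

```python
import numpy as np

weights = np.ones((32, 7200))   # e.g. a layer's weight matrix
scale = np.ones(32)             # e.g. a per-channel batch-norm scale

# Shapes are aligned from the right: 7200 vs 32 do not match,
# so this raises the broadcast error from the question.
try:
    weights * scale
except ValueError as e:
    print(e)  # operands could not be broadcast together ...

# Reshaping the per-channel vector to a column makes the shapes
# compatible: (32, 7200) * (32, 1) broadcasts along axis 1.
folded = weights * scale.reshape(32, 1)
print(folded.shape)  # (32, 7200)
```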
Solved! Go to Solution.
Accepted Solutions
2022-03-17 10:46 AM
The problem comes from the optimization that folds the batch normalization.
With the undocumented option "--optimize.fold_batchnorm False" the model is analyzed correctly.
You can pass the option directly on the stm32ai command line, or, if you are using X-Cube-AI inside STM32CubeMX, you can add it on the first screen of the advanced parameters window.
Regards
Daniel
In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
2022-03-16 9:08 AM
Can you share the model so I can reproduce the issue and have the development team fix the problem?
Thanks in advance
Regards
Daniel
2022-03-16 9:20 AM
Hello @fauvarque.daniel,
Thanks for the quick reply. Should I share the Keras .h5 file with you?
2022-03-16 9:22 AM
Yes, please.
2022-03-16 9:33 AM
2022-03-16 9:37 AM
If I may, could you also provide the quantized TFLite file, so that I have exactly the file you are using?
Daniel
2022-03-16 9:51 AM
I've reproduced the problem with the .h5 file. I'll let you know if there is a workaround.
2022-03-16 2:42 PM
OK, thank you @fauvarque.daniel
2022-03-18 2:30 AM
Thanks a lot @fauvarque.daniel. The solution works. :)
Well, I am curious. Can you give a bit more insight? What do you mean by folding the batch norm?
Thanks
Rick
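[In case it helps other readers: "folding" (or fusing) batch normalization generally means merging the BN scale and shift into the preceding layer's weights and bias at conversion time, so no separate BN operation has to run on the target. A minimal NumPy sketch of the general technique, using a dense layer for simplicity (illustrative only, not X-Cube-AI's actual implementation):]

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "dense + batch norm" pair:
#   y = gamma * ((x @ W + b) - mean) / sqrt(var + eps) + beta
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 32))
b = rng.standard_normal(32)
gamma = rng.standard_normal(32)
beta = rng.standard_normal(32)
mean = rng.standard_normal(32)
var = rng.random(32) + 0.5
eps = 1e-5

scale = gamma / np.sqrt(var + eps)

# Fold BN into the layer: per-output-channel scale on W, adjusted bias.
W_folded = W * scale            # (8, 32) * (32,) scales each output column
b_folded = (b - mean) * scale + beta

y_unfolded = (x @ W + b - mean) * scale + beta
y_folded = x @ W_folded + b_folded

print(np.allclose(y_unfolded, y_folded))  # True
```

[The same algebra applies per output channel of a convolution; the tool's folding pass presumably does this reshaping internally, which is where a shape mismatch like (32,7200) vs (32,) can surface if the pass misinterprets the layout.]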
