2022-03-16 08:50 AM
I have a convolutional model with convolutional layers, batch normalization layers, and dense layers at the end. The model is converted to a TFLite model. Inference works perfectly on a computer using TFLite, but when I try to deploy it on the Nucleo H743ZI2 I get this error.
The network layers and their shapes are shown in the picture. Has anyone come across this problem?
As far as my understanding goes, I did not create the model incorrectly. It seems to be a misinterpretation by the STM Cube library.
Additional Info: I am using STM Cube AI version 7.1.0
Thanks in advance
Rick
Solved! Go to Solution.
2022-03-18 09:23 AM
During code generation there is an optimization phase that can merge some layers. A typical case is a Conv2D followed by a BatchNormalization followed by a ReLU: the optimized graph will contain just a single Conv2D.
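To illustrate why this merge is lossless, here is a minimal NumPy sketch of batch-norm folding: the BN scale and shift are absorbed into the convolution's weights and bias, so the folded Conv2D produces the same output as the original Conv2D + BatchNormalization pair. The shapes and random values are made up for the demonstration.

```python
import numpy as np

# Fold BatchNormalization into a preceding Conv2D:
#   bn(conv(x)) = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
# which is itself a convolution with rescaled weights and a new bias.
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3, 8, 16))            # conv kernel (H, W, Cin, Cout)
b = rng.normal(size=16)                       # conv bias, one per output channel
gamma, beta = rng.normal(size=16), rng.normal(size=16)   # BN learned params
mean, var = rng.normal(size=16), rng.uniform(1, 2, 16)   # BN running stats
eps = 1e-3

scale = gamma / np.sqrt(var + eps)
w_folded = w * scale                          # broadcast over the Cout axis
b_folded = (b - mean) * scale + beta

# Check equivalence on a single 3x3x8 input patch (one conv window)
x = rng.normal(size=(3, 3, 8))
conv_out = np.tensordot(x, w, axes=([0, 1, 2], [0, 1, 2])) + b
bn_out = scale * (conv_out - mean) + beta
folded_out = np.tensordot(x, w_folded, axes=([0, 1, 2], [0, 1, 2])) + b_folded
assert np.allclose(bn_out, folded_out)
```

Since folding removes the BN layer entirely, the generated network is smaller and faster at no cost in accuracy, which is why the optimizer does it by default.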
For example for this part of the model
After the optimizer the model will look like
Regards
Daniel
2022-03-21 03:45 AM
Thanks, I see. That's an interesting insight.
2023-11-01 05:47 AM
Is this still the correct syntax? I am using Cube AI 8.1.0, and when I add --optimize.fold_batchnorm False I get an unrecognized argument error.
2023-11-01 06:10 AM - edited 2023-11-01 06:15 AM
A little more information: the error itself only occurs when I attempt to quantize the model to 8-bit data types by adding these lines when creating the TFLite model:
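The original post's code snippet is not reproduced here, but a typical full-integer (int8) post-training quantization setup with the TFLiteConverter looks like the sketch below. The model architecture and the representative dataset are placeholders, not the poster's actual code.

```python
import numpy as np
import tensorflow as tf

# Placeholder model: Conv2D + BatchNormalization, as in the thread
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(4, 3),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Representative dataset used to calibrate the quantization ranges
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 28, 28, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force int8 ops and int8 input/output tensors
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()  # serialized flatbuffer bytes
```

With these settings all tensors, including inputs and outputs, are int8, which is the configuration most likely to expose converter- or importer-side issues like the one described above.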
2023-12-27 06:39 AM
Hello DanF,
I have a similar problem, did you manage to solve it?
Thank you in advance,
Ioan