
Warning E010(InvalidModelError) when analyzing a network *.h5 with X-CUBE-AI 7.0.0

MSant.11
Associate III

I've tried to analyze a network *.h5 with X-CUBE-AI 7.0.0, but I get this warning:

E010(InvalidModelError): Model saved with Keras 2.6.0 but <= 2.5.0 is supported

Did you fix it?

The analysis of the corresponding TFLite network gives just a warning:

WARNING: nl_7 (SOFTMAX) in SIGNED not supported. Falling back to float

Thanks

M

3 REPLIES
fauvarque.daniel
ST Employee

X-CUBE-AI 7.0.0 was built with TensorFlow 2.5, which was the latest version when we shipped the product.

We plan to move to the latest version of TensorFlow (2.6?) for our next release, by the end of 2021.

Regards

Daniel


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
Laurent
ST Employee

You need to use X-CUBE-AI 4.1.0, which was the version at the time. You can download and install older versions of X-CUBE-AI in CubeMX.

jean-michel.d
ST Employee

Hello MSant,

X-CUBE-AI 7.0.0 is based on TensorFlow 2.5 (including Keras 2.5). The associated Python module is used to load the provided .h5 model during the import phase. This is why you get the message: Model saved with Keras 2.6.0 but <= 2.5.0 is supported.
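For reference, a minimal sketch to confirm which Keras version an .h5 file was saved with (assuming h5py is installed; "model.h5" is a placeholder for your own file). This root attribute is most likely the value behind the error message:

import h5py

# Keras records the version and backend as root attributes when saving to HDF5.
with h5py.File("model.h5", "r") as f:
    keras_version = f.attrs.get("keras_version")
    backend = f.attrs.get("backend")

print("keras_version:", keras_version)
print("backend:", backend)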

Normally this will be fixed in the next release, which should be based on the latest stable version of TensorFlow/Keras.

Converting the .h5 model to TFLite is a good way to go, in particular if you want a quantized version at the end.
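As an illustration, a minimal conversion sketch for a full int8 (SIGNED) quantization; the file names, the 28x28x1 input shape and the random representative data are placeholders to be replaced with your own model and calibration samples:

import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("model.h5")

def rep_data_gen():
    # Calibration data for int8 quantization; replace the random tensors
    # with a few hundred real input samples.
    for _ in range(100):
        yield [np.random.rand(1, 28, 28, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # SIGNED scheme, as in the warning
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())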

For the warning, I suppose that during the Keras-to-TFLite conversion the model was also quantized. This warning indicates that the last (softmax) layer is converted to float by the code generator, because an int8 C implementation of the softmax layer is not provided for precision reasons. If you look at the generated code, dequantize and quantize layers are added before and after the float softmax layer.
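To make the fallback concrete, here is a small conceptual numpy sketch of that dequantize / float softmax / quantize pattern; the logits, scales and zero points are made-up values, and this is only an illustration, not the actual generated C code:

import numpy as np

def dequantize(q, scale, zero_point):
    # int8 -> float
    return scale * (q.astype(np.float32) - zero_point)

def quantize(x, scale, zero_point):
    # float -> int8
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

# Example int8 logits with made-up quantization parameters.
q_logits = np.array([-20, 5, 40], dtype=np.int8)
logits = dequantize(q_logits, scale=0.1, zero_point=0)

# Softmax computed in float for precision.
e = np.exp(logits - logits.max())
probs = e / e.sum()

# Re-quantize the output (scale 1/256 and zero point -128 are typical for a TFLite int8 softmax output).
q_probs = quantize(probs, scale=1.0 / 256.0, zero_point=-128)
print(probs, q_probs)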

br,

jm