2019-11-23 05:09 PM
I am using Cube.AI to load a model from Target Selection. However, this model was trained in Google Colab, so when I load it and analyze it I get the error: Unknown Initializer GlorotUniform.
Searching the web, the suggested solution applies not when building the model but when loading it, but since I don't know how to modify how Cube.AI loads the model, I am asking for some help here.
2019-11-25 01:02 AM
Hi,
What is the version of the X-CUBE-AI which is used?
"GlorotUniform" indicates that tf.keras has been used to generate/train the model.
With X-CUBE-AI 4.1, there is a detection to know if the h5 file ("Keras" model) has been created with tf.keras or keras.
If you use X-CUBE-AI 4.1, can you try importing the h5 file with the command-line tool:
> set TF_KERAS=1
> stm32ai analyze -m <tf-keras-model-path>
br,
Jean-Michel
2020-06-04 04:13 AM
Hi Jean-Michel,
I faced the same issue as DAlia.1551 described. I built and trained my model in Google Colab using TensorFlow version 2.2.0, which embeds Keras version 2.3.0.
Using the command you provided (thanks for that, btw) gave me the insight that X-CUBE-AI (I have version 5.0.0) only supports Keras up to 2.2.4-tf, as described in this error message:
Neural Network Tools for STM32 v1.2.0 (AI tools v5.0.0)
-- Importing model
Using TensorFlow backend.
INVALID MODEL: Couldn't load Keras model /Users/David/Downloads/model.h5, error: Model saved with Keras 2.3.0-tf but <= 2.2.4-tf is supported
My question now is: do you intend to support these newer versions of TensorFlow (and Keras) in the near future? When can this be expected?
If support from your side is far away (months away), what workaround do you suggest for me? Rebuilding the whole model using an earlier version of Keras?
Thanks in advance!
Best Regards,
David
2020-06-04 05:55 AM
Hi David,
This issue has been identified and we are working on it for a future release (date not yet committed). I will complete my answer when a date is committed.
In the meantime, a possible workaround is to convert the Keras model to TF Lite format and import the .tflite file instead:
converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(<keras_model_path>)
tfl_model = converter.convert()
with open(tfl_file_path, "wb") as f:
    f.write(tfl_model)
Another option is to switch the Colab runtime back to TensorFlow 1.x before training and saving the model:
%tensorflow_version 1.x
https://colab.research.google.com/notebooks/tensorflow_version.ipynb#scrollTo=aR_btJrKGdw7
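For reference, the TF2-native way to do the same conversion is a sketch like the following (the tiny Sequential model here is only a placeholder; your real trained network would take its place):

```python
# TF2-native equivalent of the v1 converter above (sketch: the tiny
# Sequential model is only a placeholder for the real trained network).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert the in-memory model (not an h5 file) to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tfl_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tfl_model)
```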
br,
Jean-Michel
2020-06-04 07:11 AM
Hi Jean-Michel,
Many thanks for your prompt reply, very impressive!
To use the TF Lite format was actually my first approach, but there I faced an even more ambiguous error: simply "INTERNAL ERROR".
I tried it again now using
tf.compat.v1.lite.TFLiteConverter.from_keras_model_file()
as you suggest, instead of
tf.lite.TFLiteConverter.from_keras_model()
which I used before. But I still get "INTERNAL ERROR" in the CubeMX AI plugin.
Using the CLI version of stm32ai as you suggested does not yield much extra information, as you can see here:
David@Davids-MacBook-Air-3:~/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/5.0.0/Utilities/mac$ ./stm32ai analyze -m ~/Downloads/dnn_multivar_multistep_classifier_targetsteps_1_L1_8_L2_8_sigmoid_model_nepochs_10_nstepsperepoch_100_date_20-06-04.h5.tflite
Neural Network Tools for STM32 v1.2.0 (AI tools v5.0.0)
-- Importing model
INTERNAL ERROR:
Creating report file /Users/David/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/5.0.0/Utilities/mac/stm32ai_output/network_analyze_report.txt
INTERNAL ERROR:
and the crash report file does not contain any information that is useful (at least not to my eye), but perhaps you can make something of it:
David@Davids-MacBook-Air-3:~/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/5.0.0/Utilities/mac$ cat /Users/David/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/5.0.0/Utilities/mac/stm32ai_output/network_analyze_report.txt
Neural Network Tools for STM32 v1.2.0 (AI tools v5.0.0)
Created date : 2020-06-04 16:02:10
Exec/report summary (analyze err=-1)
------------------------------------------------------------------------------------------------------------------------
error : INTERNAL ERROR:
model file : /Users/David/Downloads/dnn_multivar_multistep_classifier_targetsteps_1_L1_8_L2_8_sigmoid_model_nepochs_10_nstepsperepoch_100_date_20-06-04.h5.tflite
type : tflite (tflite)
c_name : network
compression : None
quantize : None
L2r error : NOT EVALUATED
workspace dir : /Users/David/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/5.0.0/Utilities/mac/stm32ai_ws
output dir : /Users/David/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/5.0.0/Utilities/mac/stm32ai_output
INTERNAL ERROR:
Evaluation report (summary)
--------------------------------------------------
NOT EVALUATED
Regarding your suggestion to change to TF version 1.x: I would really like to avoid that, since I have built my entire architecture around TF 2.0 and would rather not go back to 1.x with its non-eager execution and such; I hope you understand.
Do you think it would work if I rebuilt it using the "pure" Keras library (i.e. using "import keras" instead of "from tensorflow import keras")? To be honest, the way Keras and TF versions relate after the Keras integration is a bit confusing to me...
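What I have in mind is roughly this (hypothetical layer sizes standing in for my real network; the trained tf.keras model below is just a placeholder):

```python
# Sketch of rebuilding the same topology with standalone Keras and
# copying the trained weights across (hypothetical layer sizes).
import tensorflow as tf
import keras  # standalone Keras, not tensorflow.keras

# The existing tf.keras model (placeholder for the real trained one).
tf_model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Same topology rebuilt with "pure" Keras, weights copied over, then saved.
rebuilt = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
rebuilt.set_weights(tf_model.get_weights())
rebuilt.save("rebuilt_model.h5")
```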
Let me know what you think.
Thanks in advance!
Best Regards,
David
2020-09-22 10:52 AM
Hi,
I've faced the same issue as DAlia.1551 and David. I developed a neural network model using TF 2.x and then tried importing the .h5 file to my STM32. The suggested solutions did not work in the case of my recurrent neural network: I couldn't afford to downgrade to TF version 1.x, nor did converting the model to TFLite work, because of the presence of cells such as LSTM.
What worked, however, was changing the one line of code that directly affects how the model is exported:
import keras                    # correct: standalone Keras
# from tensorflow import keras  # causes "Unknown Initializer GlorotUniform"
model.save("model_name.h5")
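A minimal end-to-end version of this (hypothetical layer sizes; only the import line differs from the tf.keras variant) looks like:

```python
# Minimal sketch of the export path: build and save with standalone Keras,
# so the saved h5 config avoids the tf.keras-specific "GlorotUniform"
# initializer entry that triggers the error above (assumption based on
# the behavior reported in this thread).
import keras  # standalone Keras, not tensorflow.keras

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.save("model_name.h5")
```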
Let me know if it helped anyone.
Best Regards,
Filip