
Quantized models throw error in Cube AI

MGior.1
Associate

Hi everyone,

I am trying to implement an image recognition application on an STM32L475. I succeeded in deploying a Keras model with Cube AI and wanted to reduce the memory footprint with a quantized model. I used the following code to quantize the Keras model, based on a question I found on this forum.

# model_s is a trained Keras model in the same Python script
converter = tf.lite.TFLiteConverter.from_keras_model(model_s)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Note: full-integer conversion also needs a calibration generator:
# converter.representative_dataset = ...
quant_model = converter.convert()
# Saving the model
with open("Models/converted_model.tflite", "wb") as f:
    f.write(quant_model)
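For context, the int8 quantization TFLite performs here is an affine mapping of float values to 8-bit integers via a scale and zero point. A minimal NumPy sketch of that mapping (the scale and zero-point values below are made up for illustration, not taken from any real model):

```python
import numpy as np

def quantize_int8(x, scale, zero_point):
    """Affine quantization: q = round(x / scale) + zero_point, clipped to int8 range."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    """Inverse mapping back to float (with quantization error)."""
    return (q.astype(np.float32) - zero_point) * scale

# Hypothetical activation values and quantization parameters
x = np.array([0.0, 0.5, -1.0, 2.0], dtype=np.float32)
scale, zero_point = 0.02, 0
q = quantize_int8(x, scale, zero_point)   # int8 values [0, 25, -50, 100]
x_back = dequantize(q, scale, zero_point) # approximately the original floats
```

This is why the quantized model is smaller: each weight or activation is stored as one int8 byte plus shared per-tensor (or per-channel) scale/zero-point metadata, instead of a 4-byte float.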

When I try to analyze the model Cube AI reports the following error:

Analyzing model 
Neural Network Tools for STM32 v1.2.0 (AI tools v5.0.0) 
-- Importing model 
INTERNAL ERROR:  
 
Creating report file C:\Users\<user_name>\STM32Cube\Repository\Packs\STMicroelectronics\X-CUBE-AI\5.0.0\Utilities\windows\stm32ai_output\network_analyze_report.txt 
 
INTERNAL ERROR: 

And on the report file:

Neural Network Tools for STM32 v1.2.0 (AI tools v5.0.0)
Created date       : 2020-05-12 23:00:07
 
Exec/report summary (analyze err=-1)
------------------------------------------------------------------------------------------------------------------------
error              : INTERNAL ERROR: 
model file         : C:\...\Models\converted_model.tflite
type               : tflite (tflite)
c_name             : network
compression        : None
quantize           : None
L2r error          : NOT EVALUATED
workspace dir      : C:\Users\<user_name>\AppData\Local\Temp\mxAI_workspace1060342814162004810839823904558311
output dir         : C:\Users\<user_name>\STM32Cube\Repository\Packs\STMicroelectronics\X-CUBE-AI\5.0.0\Utilities\windows\stm32ai_output
 
 
INTERNAL ERROR: 
 
Evaluation report (summary)
--------------------------------------------------
NOT EVALUATED

I can provide more Python-side code if needed. Is this a known error?

Thanks

2 REPLIES
fauvarque.daniel
ST Employee

Which TensorFlow version are you using? We have seen that post-training quantization is not yet fully supported in TensorFlow 2.x.

Can you run the stm32ai command line with _DEBUG=1 set?

The stm32ai command line is located under: C:\Users\<user_name>\STM32Cube\Repository\Packs\STMicroelectronics\X-CUBE-AI\5.0.0\Utilities\windows\stm32ai.exe

You can launch the following command (shell-like):

_DEBUG=1 stm32ai analyze -m <path_to_model>
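The `_DEBUG=1 command` prefix is POSIX-shell syntax, which sets the variable for that one invocation only; plain Windows cmd.exe does not accept it and needs `set` instead (the cmd.exe form below is an assumption, not from the thread). A quick sketch of the mechanism:

```shell
# The VAR=value prefix exports the variable to that single command only:
_DEBUG=1 sh -c 'echo "child sees _DEBUG=$_DEBUG"'
# The invoking shell itself is left unchanged:
echo "parent sees _DEBUG=${_DEBUG:-unset}"

# So the analyze call, spelled out for a POSIX-style shell:
#   _DEBUG=1 stm32ai analyze -m <path_to_model>
# Windows cmd.exe equivalent (assumed): set the variable first, then run:
#   set _DEBUG=1
#   stm32ai.exe analyze -m <path_to_model>
```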

It will hopefully give more information about why it failed.

If you are willing to share the model with us, don't hesitate to send it to me directly by mail (daniel.fauvarque at st.com)

Regards

Daniel


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
MGior.1
Associate

Thanks for replying Daniel.

I am using TensorFlow 2; I will try to train the same model this afternoon using TensorFlow 1 and report back.

I have tried to run your command, but my CMD does not find DEBUG ('DEBUG' is not recognized as an internal or external command, operable program or batch file.).

I am sending you my model by email.

Regards,

Marco