Validation Error: Unable to generate the "libai_network" library

Jinchen
Associate II

 

Hello,

I am currently working on validating a machine learning model that I have deployed on an STM32 microcontroller. While the model successfully runs on the STM32, the results it produces differ significantly from those obtained when running the model on a computer.

Additionally, when I attempt to perform validation on a desktop environment, I encounter the following error: 

E103(CliRuntimeError): Unable to generate the "libai_network" library.

Since the website could not accept the .tflite file, I have attached the picture of the structure of the model. The model was trained using TensorFlow 2.17 and utilizes approximately 75KB of flash memory and 15KB of RAM.
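
A minimal sketch of how the .tflite model can be run on the desktop with the TFLite interpreter to obtain reference outputs for this comparison (this assumes TensorFlow 2.17 is installed; the file name is a placeholder):

# Minimal sketch: run the .tflite model on the desktop and print a reference
# output that can be compared with the values produced by the STM32 firmware.
# "newmodel0930.tflite" is a placeholder file name.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="newmodel0930.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Use a fixed, reproducible test sample so that the desktop run and the
# on-target run see exactly the same input data.
rng = np.random.default_rng(seed=0)
sample = rng.random(size=input_details[0]["shape"], dtype=np.float32)

interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
print("Desktop reference output:", interpreter.get_tensor(output_details[0]["index"]))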

I would greatly appreciate any guidance or assistance in resolving these issues.

Thank you!


10 REPLIES
fauvarque.daniel
ST Employee

You can attach a zip with the model in it; that will help us understand the root cause.


hamitiya
ST Employee

Hello,

To provide us with your .tflite model, could you please change its extension to an allowed one such as .7z, or compress it into a .7z archive?

 

Best regards,

Yanis

 



Thanks so much for the reply.

Please find the model in the attachment.

 

When we perform validation, we see this error instead of the one you reported earlier:

 

 

TOOL ERROR: tensorflow/lite/kernels/transpose_conv.cc:299 weights->type != input->type (INT8 != FLOAT32)
Node number 17 (TRANSPOSE_CONV) failed to prepare.
Failed to apply the default TensorFlow Lite delegate indexed at 0.

 

Based on the model you shared, we can confirm it has float32 inputs and outputs, but the weights are also expected to be of the same type, which is not the case here.

Layer 17 "TransposeConv" has quantized int8 weights (name: tfl.pseudo_qconst1).
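
If it helps, a quick way to spot such tensors is to list the tensor types stored in the .tflite file with the TFLite interpreter. A minimal sketch (the file name is a placeholder):

# Minimal sketch: list every int8 tensor in the .tflite file, which reveals
# quantized weight constants such as "tfl.pseudo_qconst1".
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="newmodel0930.tflite")  # placeholder name
for detail in interpreter.get_tensor_details():
    if detail["dtype"] == np.int8:
        print(detail["index"], detail["name"], detail["shape"])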

 

I would suggest modifying the model so that this layer has float32 weights. Also, check whether any other layers have int8 weights.
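
If the model was exported with the converter's default optimization enabled, that is the most likely source of these int8 constants. A minimal sketch of a float32-only export, assuming the original Keras model is still available (file names are placeholders):

import tensorflow as tf

# Minimal sketch: re-export the model without weight quantization so that
# every layer, including TransposeConv, keeps float32 weights.
# "newmodel0930.h5" and the output name are placeholders.
model = tf.keras.models.load_model("newmodel0930.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Intentionally do NOT set converter.optimizations = [tf.lite.Optimize.DEFAULT];
# that flag (used without a representative dataset) performs weight-only int8
# quantization, which is likely what produced the int8 TransposeConv weights.
tflite_float32 = converter.convert()

with open("newmodel0930_float32.tflite", "wb") as f:
    f.write(tflite_float32)

Alternatively, if a smaller memory footprint is needed, a fully integer-quantized export driven by a representative dataset is generally a better fit for on-device deployment than weight-only quantization.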

 

Best regards,

Yanis



Thanks so much for the reply!

That is probably because I optimized the model; I have attached an unoptimized version (newmodel0930_unoptimized.tflite). It can be analyzed successfully with the TFLite Micro runtime. However, if I use the TFLite Micro runtime to analyze the model and generate the code, the project deletes the Middlewares, and the Validate on desktop option is no longer available.

That is why I have been using the STM32Cube.AI MCU runtime instead of the TFLite Micro runtime. But with the STM32Cube.AI MCU runtime, Validate on desktop fails and the performance of the model is reduced.

Could you please take a look at the unoptimized model for me?

Sincerely,

Jinchen.

There are some glitches when regenerating code with STM32CubeMX integrated in STM32CubeIDE. With CubeMX integrated in CubeIDE, the first thing CubeIDE triggers is a new code generation, so you are always in project update mode rather than project creation mode.

I recommend using STM32CubeMX outside of STM32CubeIDE and generating a new STM32CubeIDE project.

We have implemented a workaround for this issue in the 9.1 release.

As a side note, you can use our developer cloud benchmark service to benchmark your model on different targets:
https://stm32ai-cs.st.com/

Regards

Daniel



Hello,

To complete Daniel's answer and to address one of your points:

"And the Validate on desktop option is no longer available.": Indeed, validation on desktop is only available with STM32Cube.AI MCU Runtime, not with TFLite Micro Runtime.

 

On my side, I cannot reproduce your issue with X-CUBE-AI 9.1.0; I have attached a project template to this message. Feel free to reproduce it in ST Edge AI Developer Cloud.

 

Best regards,

Yanis



Thanks so much for providing the ST Edge AI Developer Cloud.

I used it to generate the STM32CubeMX .ioc file and then generated the code. That solved the missing Middlewares problem, but when I run the code, it stops while running the network.

Specifically, it stops at

batch = ai_mnetwork_run(net_exec_ctx[idx].handle, ai_input, ai_output);

from aiSystemPerformance.c

(attached screenshot: Jinchen_0-1728222151908.png)

 

Thanks for your answer; I will try on my side and give you a response as soon as I can.

For your information, here is the inference time on the NUCLEO-F401RE that we reproduced on ST Edge AI Developer Cloud.

(attached screenshot: hamitiya_0-1728285250589.png)

 

Best regards,

Yanis

