
X-CUBE-AI, tflite model: INTERNAL ERROR: Transpose of batch not supported

MAles.1
Associate II

Using X-CUBE-AI 6.0.0 and trying to import a TFLite model of an RNN network with both dense and LSTM layers, this error is returned. I can't find any information about it. Is there something that can be done?

Thank you very much

6 REPLIES
MAles.1
Associate II

As additional information, this error is reported during the "Analyze" procedure. With the full Keras model there's no such error, apart from the model being too big for the available RAM.

fauvarque.daniel
ST Employee

Transpose of the batch dimension is not supported because, after that layer, the batch size is likely no longer 1, and we only support a batch size of 1.
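
As an illustration only (not an official workaround): one way to keep the batch dimension fixed at 1 on the Keras side is to build the model with an explicit batch size before converting. The shapes and layer sizes below are placeholders, not your actual model.

import tensorflow as tf

# Sketch only: pin the batch dimension to 1 when building the Keras model,
# so the converted graph never sees a batch size other than 1.
# TIMESTEPS, FEATURES and the layer sizes are placeholder values.
TIMESTEPS, FEATURES = 200, 1

inputs = tf.keras.Input(shape=(TIMESTEPS, FEATURES), batch_size=1)
x = tf.keras.layers.LSTM(32, return_sequences=True)(inputs)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)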

This transpose layer is added by the TFLite converter. Are you using TensorFlow version 2.3.1?

Are you trying to convert to TFLite in order to use TFLite post-training quantization?
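
For reference, the usual post-training quantization path from a Keras model looks roughly like this (the file name is just an example):

import tensorflow as tf

# Typical Keras -> TFLite conversion with default post-training quantization.
# "model" is the trained Keras model; "model.tflite" is an example file name.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)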

Can you share your model so we can look at the issue?

Regards

Daniel


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
MAles.1
Associate II

Hi, I'm using TF 2.4.1 and converting a Keras model to TFLite. The problem happens with or without quantization.

I'm attaching an example model.

Thank you very much

https://drive.google.com/file/d/1dvC4nHtnFZNU4Ka5ETkwRb6BwUKeoYbp/view?usp=sharing

IHema.1
Associate

Hi,

I have exactly the same problem. My neural network is made up of LSTM + dense layers trained with TensorFlow 2.4.1. The input of the network is a vector of 200 consecutive data points, and the output is a vector of 200 x 2 probabilities, i.e. the probability of each data point belonging to each of our 2 categories. However, when converting the TensorFlow Lite file to a C file with STM32CubeMX, I also get the "Transpose of Batch not supported" error message.
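
For reference, a minimal Keras sketch of the architecture I'm describing (the LSTM width and the single input feature per step are assumptions, not the exact trained configuration):

import tensorflow as tf

# Rough sketch of the network described above: 200 time steps in,
# 200 x 2 class probabilities out. 64 LSTM units and 1 feature per
# time step are illustrative choices only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(200, 1)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.Dense(2, activation="softmax"),
])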

Can you help me solve my problem?

MAles.1
Associate II

I'm the original poster; in the meantime we solved it by not using TFLite. We reduced the original model by downsampling the input data, thus reducing the input vector size. This lowered the RAM usage enough that we can use the original Keras model.
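
Roughly what we did (the factor of 4 below is just an example, not the value we actually used; pick whatever keeps enough signal for your task):

import numpy as np

# Example only: keep every 4th sample along the time axis, so a
# (batch, 800, features) array becomes (batch, 200, features).
def downsample(x: np.ndarray, factor: int = 4) -> np.ndarray:
    return x[:, ::factor, :]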

ARaja.2
Associate

Still facing this issue. The workaround suggested by @MAles.1 works, but the Analyze option still fails for a TFLite model. X-CUBE-AI version 7.1.0, TensorFlow version 2.5.

Is there an update / fix for this?