
stedgeai INTERNAL ERROR: Order of dimensions of input cannot be interpreted

sandeepchawla74
Associate II

Hello,

I am trying to run a text detection model on the STM32N6. It is a CNN model.

The command used is:

stedgeai generate --model custom_model.onnx --target stm32n6 --st-neural-art default@user_neuralart_STM32N6570-DK.json --input-data-type float32

The error I get is: INTERNAL ERROR: Order of dimensions of input cannot be interpreted

The model uses the following operators:

Conv

Relu

BatchNormalization

MaxPool

Concat

Resize

The model was also using the Transpose operator, but I removed it.
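For reference, removing a pass-through Transpose node can be scripted with the onnx package. This is only a sketch of the idea, not the exact edit made to the attached model: it assumes each removed Transpose has a single input and output, and that the layout change it performed is compensated for elsewhere (otherwise downstream shapes break).

```python
def remove_transpose_nodes(model_path, out_path):
    """Rewire consumers of each Transpose to its input, then drop the node.

    Sketch only: safe only when the Transpose is redundant or its layout
    change is handled elsewhere in the graph.
    """
    # Imported inside the function so the sketch reads standalone.
    import onnx

    model = onnx.load(model_path)
    graph = model.graph
    for t in [n for n in graph.node if n.op_type == "Transpose"]:
        src, dst = t.input[0], t.output[0]
        # Point every consumer of the Transpose output at its input instead.
        for n in graph.node:
            for i, name in enumerate(n.input):
                if name == dst:
                    n.input[i] = src
        # Also fix graph outputs that referenced the Transpose directly.
        for out in graph.output:
            if out.name == dst:
                out.name = src
        graph.node.remove(t)
    onnx.save(model, out_path)
```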

Could you provide the possible reasons behind this error?

Regards,

Sandeep

6 REPLIES
Julian E.
ST Employee

Hello @sandeepchawla74,

 

I cannot say without the model.

Could you share it in a zip file?

 

Also, please try the option --use-onnx-simplifier; it sometimes helps.

Make sure to use the latest version of the ST Edge AI Core (2.2).

 

Have a good day,

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Hello Julian,

Please find the model attached. 

The version is ST Edge AI Core v2.2.0-20266 2adc00962.

Regards,

Sandeep

Hello Julian,

Is there any update on the reported issue? I cannot progress until it is resolved.

Regards,

Sandeep

Hello @sandeepchawla74,

 

Sorry for the late answer.

 

We found that your issue is caused by a bug in shape computation.

In your model, seemingly starting at conv2d_12, the output that should have shape (1x512x8x8) is seen by the ST Edge AI Core as (1x512x18x18). The Concat below then tries to concatenate a (1x512x18x18) tensor with a (1x512x8x8) one (right arrow in the screenshot), which fails.
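For context, Concat only requires the non-axis dimensions of its inputs to match, which is why the mis-computed spatial size surfaces at that node. The rule can be sketched in plain Python (a hypothetical helper, not part of any ST tool):

```python
def concat_output_shape(shapes, axis):
    """Output shape of an ONNX Concat; raises if inputs are incompatible.

    Implements the Concat rule: every dimension except `axis` must match
    across all inputs; the `axis` dimension is summed.
    """
    base = list(shapes[0])
    for s in shapes[1:]:
        if len(s) != len(base):
            raise ValueError(f"rank mismatch: {s} vs {base}")
        for i, (a, b) in enumerate(zip(base, s)):
            if i != axis and a != b:
                raise ValueError(f"dim {i} mismatch: {a} vs {b}")
        base[axis] += s[axis]
    return base

# Correct shapes concatenate on the channel axis...
print(concat_output_shape([(1, 512, 8, 8), (1, 512, 8, 8)], axis=1))  # [1, 1024, 8, 8]
# ...but (1x512x18x18) vs (1x512x8x8) raises, as in the error above.
```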

[Attached screenshot: JulianE_0-1757494402783.png]

 

The issue seems to be triggered by the dilation combined with a missing "kernel_shape" attribute (omitting it is legal in ONNX).
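For reference, the spatial output size of an ONNX Conv along one axis is floor((in + pad_begin + pad_end - dilation*(kernel - 1) - 1) / stride) + 1, so a tool that misreads the kernel size or drops the dilation computes a different, inflated shape. A small sketch (the numbers are illustrative, not taken from the attached model):

```python
def conv_out_dim(in_dim, kernel, stride=1, pad=(0, 0), dilation=1):
    """ONNX Conv output size along one spatial axis (floor mode)."""
    effective_kernel = dilation * (kernel - 1) + 1
    return (in_dim + pad[0] + pad[1] - effective_kernel) // stride + 1

# With the right attributes, a dilated "same" conv preserves the size...
print(conv_out_dim(8, kernel=3, pad=(2, 2), dilation=2))  # 8
# ...but if the kernel is misread as 1x1 (no kernel_shape attribute),
# the same padding inflates the output instead.
print(conv_out_dim(8, kernel=1, pad=(2, 2)))              # 12
```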

 

Editing the model and adding "kernel_shape" to every Conv solves the issue.
The script below also saves the model with ONNX IR version 9, as version 10 is not supported by the core.

 

Here is the code:

import onnx
from onnx import helper, numpy_helper, checker

# Load your model
model = onnx.load("modified_model.onnx")
graph = model.graph

for node in graph.node:
    if node.op_type == "Conv":
        # Check if kernel_shape attribute exists
        if not any(attr.name == "kernel_shape" for attr in node.attribute):
            # Get the weight tensor (2nd input)
            weight_name = node.input[1]
            weight_tensor = next((init for init in graph.initializer if init.name == weight_name), None)
            
            if weight_tensor is not None:
                W = numpy_helper.to_array(weight_tensor)
                # W shape = (out_channels, in_channels/groups, kH, kW)
                kH, kW = W.shape[2], W.shape[3]

                # Add kernel_shape attribute
                kernel_shape_attr = helper.make_attribute("kernel_shape", [kH, kW])
                node.attribute.append(kernel_shape_attr)
                print(f"Added kernel_shape {[kH, kW]} to Conv node {node.name or '[unnamed]'}")
            else:
                print(f"Could not find weights for Conv node {node.name}, skipping.")

# Force IR version = 9
model.ir_version = 9

# (optional) Run checker to confirm validity
checker.check_model(model)

# Save fixed model
onnx.save(model, "model_fixed_ir9.onnx")

 

Have a good day,

Julian



Hello Julian,

Thanks for the update.

I could compile using stedgeai, but the size was much bigger than expected (61 MB).

The start address is 0x70380000. Can I increase the size of this section (and to how much)?

Further, when I tried to quantize the model to int8 to reduce the size, it gave this error:

Unsupported layer types: ConvInteger, DynamicQuantizeLinear

Is there a plan to support these layers?

 

Regards,

Sandeep


Hello @sandeepchawla74,

 

I used the model produced by the script above in the Dev Cloud and successfully quantized it down to around 22.295 MB.

Could you try to replicate this and see if it helps?
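As a side note, ConvInteger and DynamicQuantizeLinear are typically what onnxruntime's *dynamic* quantization emits; *static* quantization in QDQ form instead produces QuantizeLinear/DequantizeLinear pairs, which the core handles. A rough sketch with onnxruntime (the file names and calibration reader are placeholders, not the exact Dev Cloud settings):

```python
def quantize_qdq(model_in="model_fixed_ir9.onnx",
                 model_out="model_int8_qdq.onnx",
                 calib_feeds=None):
    """Static int8 quantization in QDQ form via onnxruntime.

    Sketch only: `calib_feeds` should be an iterable of
    {input_name: ndarray} dicts representative of real data.
    """
    # Imported inside the function so the sketch reads standalone.
    from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                          QuantType, quantize_static)

    class Reader(CalibrationDataReader):
        def __init__(self, feeds):
            self._it = iter(feeds)

        def get_next(self):
            return next(self._it, None)

    quantize_static(
        model_in,
        model_out,
        Reader(calib_feeds or []),
        quant_format=QuantFormat.QDQ,    # QuantizeLinear/DequantizeLinear pairs
        activation_type=QuantType.QInt8,
        weight_type=QuantType.QInt8,
    )
```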

 

I have attached the one I obtained on the Dev Cloud.

 

Concerning the unsupported layers, it is hard to say when they will be supported.

I noted the need and will report it to the dev team.

 

Have a good day,

Julian
