
STM32CubeMX ERROR when analyzing ONNX model (CNN)

STHenry
Associate

Dear Community,

I'm evaluating whether some of the STM32 ICs are suitable for future projects using simple CNNs (Convolutional Neural Networks).

I'm running into problems when trying to analyze a simple CNN model in ONNX format (converted to ONNX from PyTorch) in the STM32CubeMX AI IDE. I'm using ST Edge AI Core v2.0.0-20049, and once I click 'Analyze' in the network tab, the IDE gives me the error 'INTERNAL ERROR: 'kernel_shape''.

Please see the model attached below.

(attached image: STHenry_0-1748899889265.png)

I could not find any hint in the docs, and as far as I know, all the operations should be supported, shouldn't they?

I tried experimenting with the layers and the kernel sizes, but the error persisted.

The Conv layer is what triggers the error.

I would be happy about a hint on how to fix this error, or to know whether the platform does not support some operations, or whether there are known general issues with CNNs.

I attached the ONNX model below.

Thank you!

Julian E.
ST Employee

Hello @STHenry,

 

In the documentation about the ONNX layers, it is written:

Conv

Performs convolution operation

  • category: convolutional layer
  • input data types: float32
  • output data types: float32

Specific constraints/recommendations:

  • arbitrary strides, provided that they are smaller than the input size
  • arbitrary filter kernel sizes, provided that they are smaller than the input size

 

In your case the error is about kernel_shape: your input size is (1,1,32) and your kernel shape is (2,1,1), which may be the cause of your issue.
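As a quick sanity check on the PyTorch side (a minimal sketch using the shapes mentioned in the post, not ST tooling), one can print the Conv1d weight and output shapes to confirm the kernel itself fits within the input:

```python
import torch

# Reproduce the shapes from the post: input (1, 1, 32), Conv1d with kernel_size=2.
conv = torch.nn.Conv1d(in_channels=1, out_channels=1, kernel_size=2)
x = torch.randn(1, 1, 32)  # (batch, channels, length)

print(tuple(conv.weight.shape))  # (1, 1, 2): (out_ch, in_ch, kernel), smaller than the input length
print(tuple(conv(x).shape))      # (1, 1, 31): the convolution itself is valid in PyTorch
```

Since the convolution runs fine in PyTorch, the kernel-vs-input-size constraint alone does not obviously explain the error, which points toward the export step rather than the layer itself.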

 

FYI, version 2.1 of stedgeai is out, and I would advise always using the latest version, as it includes all the fixes for bugs in previous versions. In this case, however, the error still occurs.

 

Have a good day,

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Dear Julian,

 

Thank you very much for your kind reply.

I updated the stedgeai to the latest version.

I also tried all possible kernel/input sizes, unfortunately all with the same outcome and error.

 

Here is a minimalistic code example (PyTorch) with only one Conv layer, along with the code to convert from PyTorch to ONNX format.

Are you able, @Julian E. (or someone else), to use CNNs built in PyTorch and get the resulting ONNX model running?

How would you adapt the following example to get a working Conv layer for the STM32s?

#!/usr/bin/env python3
import torch


class CNN(torch.nn.Module):
    def __init__(self, n_features, n_filters, n_kernel):
        super().__init__()
        self.conv = torch.nn.Conv1d(in_channels=n_features, out_channels=n_filters,
                                    kernel_size=(n_kernel,))

    def forward(self, x):
        x = self.conv(x)

        return x


def convert_model(path_onnx):
    torch_model = CNN(n_features=1, n_filters=1, n_kernel=1)
    dummy_input = (torch.randn(1, 1, 32),)

    onnx_program = torch.onnx.export(torch_model, dummy_input, dynamo=True)
    onnx_program.optimize()
    onnx_program.save(path_onnx)


if __name__ == '__main__':
    save_path = r'C:\test\model.onnx'
    convert_model(save_path)

The ONNX model output from this code is attached below.

Thank you very much!

 

SlothGrill
ST Employee

Hello,

Just as a side note, I tested building an ONNX model with my scripts, and the seemingly same network (attached here) does not produce the bug.

It looks like, on my side, the "export" scheme is not exactly the same (probably not what you want ...):

torch.onnx.export(mdl, input, out_file)

Could you please state whether this works on your network?

Cheers.

Hello @STHenry, @SlothGrill,

 

Yes indeed, it seems that it is the way you export the model that causes the issue:

You're using:

onnx_program = torch.onnx.export(torch_model, dummy_input, dynamo=True)
onnx_program.optimize()
onnx_program.save(path_onnx)


Problems with dynamo=True:

  • This uses PyTorch’s newer Dynamo/FX-to-ONNX path, which outputs an intermediate IR, not a standard .onnx file directly.
  • That format is not compatible with STM32Cube.AI / ST Edge AI Core, which expects a classic .onnx file following the ONNX opset specs.

 

Find more information at the beginning of this documentation: https://stedgeai-dc.st.com/assets/embedded-docs/supported_ops_onnx.html

 

I am not sure exactly what needs to be set, but something like this works:

class CNN(torch.nn.Module):
    def __init__(self, n_features, n_filters, n_kernel):
        super().__init__()
        self.conv = torch.nn.Conv1d(
            in_channels=n_features, 
            out_channels=n_filters, 
            kernel_size=n_kernel
        )

    def forward(self, x):
        return self.conv(x)

model = CNN(n_features=1, n_filters=2, n_kernel=1)
model.eval()

dummy_input = torch.randn(1, 1, 32)

torch.onnx.export(
    model,
    dummy_input,
    "conv_fixed.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11
)

 

Have a good day,

Julian

