TOOL ERROR: Error in computation of shapes

jonny1
Associate

I wanted to import a trained ONNX model into Cube.AI, but an error occurred. I have no idea where the problem lies. After troubleshooting, it seems the issue is in the LSTM part, but I cannot pinpoint exactly where. I'm looking forward to your explanation. Here is my model code:

import torch
from torch import nn
import torch.nn.functional as F
# from torchsummary import summary

from torchinfo import summary
## cnn extract features
class mfcn(nn.Module):

    def __init__(self):
        super(mfcn, self).__init__()
        self.stage1 = nn.Sequential(
            nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3, stride=1, padding=1, groups=1),
            nn.BatchNorm1d(16),
            nn.Conv1d(16, 16, 3, 1, 1, 2),   # dilation=2
            nn.BatchNorm1d(16),
            nn.ReLU(True),
            nn.MaxPool1d(kernel_size=2, stride=2),  # 16*320

            nn.Conv1d(16, 32, 3, 1, 1, 4),   # dilation=4
            nn.BatchNorm1d(32),
            nn.ReLU(True),
            nn.MaxPool1d(2, 2),  # 32*160

            nn.Conv1d(32, 64, 3, 1, 1, 8),   # dilation=8
            nn.BatchNorm1d(64),
            nn.ReLU(True),
            nn.MaxPool1d(2, 2),  # 64*80

            nn.Conv1d(64, 128, 3, 1, 1, 8),  # dilation=8
            nn.BatchNorm1d(128),
            nn.ReLU(True),
            nn.MaxPool1d(2, 2)   # 128*28
        )

    def forward(self, x):
        x = x.float()
        x = self.stage1(x)
        return x


class SimAM_module(torch.nn.Module):
    def __init__(self, channels=None, e_lambda=1e-4):
        super(SimAM_module, self).__init__()
        self.activation = nn.Sigmoid()
        self.e_lambda = e_lambda

    def forward(self, x):
        b, c, h, w = x.size()
        n = w * h - 1
        x_minus_mu_square = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        y = x_minus_mu_square / (4 * (x_minus_mu_square.sum(dim=[2, 3], keepdim=True) / n + self.e_lambda)) + 0.5
        return x * self.activation(y)


class lstm(nn.Module):
    def __init__(self, inputs=128, hidden=16, num_layers=1):
        super(lstm, self).__init__()  # initialize the attributes inherited from the parent class with its __init__
        self.lstm = nn.LSTM(input_size=inputs,
                            hidden_size=hidden,
                            num_layers=num_layers,
                            batch_first=True,
                            # bidirectional=True
                            )
        # self.dro = nn.Dropout(p=0.5)
        self.lin = nn.Sequential(
            nn.Linear(hidden, 7),
            nn.Sigmoid()
        )

    def forward(self, x, batch_size):
        # batch_size is passed in but not used here
        x = x.view(x.size(0), -1, 128)
        _, (h_out, _) = self.lstm(x)  # h_out: last hidden state, shape (num_layers, batch, hidden)
        h_out = h_out.view(x.size(0), -1)
        out = self.lin(h_out)
        return out




class mfcn_simam_lstm(nn.Module):
    def __init__(self):
        super(mfcn_simam_lstm, self).__init__()
        self.mfcn = mfcn()
        self.simam = SimAM_module()
        self.lstm = lstm()

    def forward(self, x):
        batch_size = x.size(0)
        x = self.mfcn(x)                    # (batch, 128, L)
        x = x.view(batch_size, 128, 1, -1)  # add a dummy spatial dim for SimAM
        x = self.simam(x)
        x = self.lstm(x, batch_size)
        return x
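
For reference, exporting such a model to ONNX typically looks like the sketch below (the 640-sample input length just follows the "16*320" shape comment in mfcn, and the opset version and file name are assumptions, not taken from the original post):

model = mfcn_simam_lstm()
model.eval()
# dummy input: (batch, channels, signal length); 640 is assumed from the shape comments
dummy_input = torch.randn(1, 1, 640)
torch.onnx.export(model,
                  dummy_input,
                  "mfcn_simam_lstm.onnx",
                  opset_version=13,          # assumption; adjust to the version actually used
                  input_names=["input"],
                  output_names=["output"])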
Julian E.
ST Employee

Hello @jonny1,

 

As of today, GRU and LSTM are not well supported.

Support should be improved in version 2.3 (2.2 is coming soon).

 

In the meantime, there are some restrictions on the options you can use; you can find them here:

https://stedgeai-dc.st.com/assets/embedded-docs/supported_ops_onnx.html#lstm 

 

One option, which requires a bit of work on your side, is to recreate the LSTM layer with other supported operations. Said differently, instead of using the high-level nn.LSTM module, you manually unroll the LSTM cell into its fundamental matrix multiplications, additions, sigmoid/tanh activations, and elementwise operations, as sketched below.
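
A minimal sketch of that idea (names such as UnrolledLSTM and x2h / h2h are purely illustrative; the sizes match the nn.LSTM defaults in your lstm module):

import torch
from torch import nn

class UnrolledLSTM(nn.Module):
    # Single-layer, unidirectional, batch_first LSTM written with basic ops only
    # (Linear/MatMul, Add, Sigmoid, Tanh, Mul), so the exported ONNX graph
    # contains no LSTM operator.
    def __init__(self, input_size=128, hidden_size=16):
        super().__init__()
        self.hidden_size = hidden_size
        # each Linear packs the four gates (i, f, g, o), like nn.LSTM's weight layout
        self.x2h = nn.Linear(input_size, 4 * hidden_size)
        self.h2h = nn.Linear(hidden_size, 4 * hidden_size)

    def forward(self, x):
        # x: (batch, seq_len, input_size)
        batch, seq_len, _ = x.shape
        h = x.new_zeros(batch, self.hidden_size)
        c = x.new_zeros(batch, self.hidden_size)
        for t in range(seq_len):
            gates = self.x2h(x[:, t, :]) + self.h2h(h)
            i, f, g, o = gates.chunk(4, dim=1)
            i = torch.sigmoid(i)
            f = torch.sigmoid(f)
            g = torch.tanh(g)
            o = torch.sigmoid(o)
            c = f * c + i * g         # cell state update
            h = o * torch.tanh(c)     # hidden state
        return h  # last hidden state, shape (batch, hidden_size), like h_out in your model

Since PyTorch packs the LSTM gates in i, f, g, o order, the weights of your trained nn.LSTM (weight_ih_l0, weight_hh_l0, bias_ih_l0, bias_hh_l0) can in principle be copied into x2h and h2h. Note that the Python loop is unrolled at export time, so this only works for a fixed sequence length, and the exported graph should then contain only Gemm/MatMul, Add, Sigmoid, Tanh and Mul nodes.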

 

Have a good day,

Julian

 


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.