2021-09-12 08:09 PM
When I use CubeMX.AI to analyze an ONNX model, I get an error that says 'list index out of range'. What should I do? When I analyze another ONNX model, a different error occurs: 'params set for InstanceNormalization layer'. Could anyone help me? Thanks :confounded_face:
2021-09-12 08:12 PM
The model I'm using was converted from PyTorch to ONNX. It works fine in both PyTorch and ONNX.
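(By "works fine in ONNX" I mean the usual checks pass. The exact check is not shown in this thread; a minimal sketch of that kind of sanity check, assuming the exported file is named model.onnx and takes a 1x1x128x128 float input, would be:)

# Minimal sketch (not from the thread): sanity-check an exported ONNX file.
# "model.onnx" and the 1x1x128x128 input shape are assumptions for illustration.
import numpy as np
import onnx
import onnxruntime as ort

onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)           # structural validity of the graph

sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
dummy = np.random.randn(1, 1, 128, 128).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})  # runs without error in onnxruntime
print([o.shape for o in outputs])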
2021-09-12 08:19 PM
Error 1: params set for InstanceNormalization layer
Error 2: list index out of range
2021-09-12 11:47 PM
The error about 'wish' can be fixed with 'sudo apt-get install tk'. However, I still don't know why the 'list index out of range' error occurs.
2021-09-13 12:21 AM
Can you share the model that reproduces the error (or a dummy untrained one)?
Thanks
Daniel
2021-09-13 07:49 PM
Thank you for your kindness. I'll post them below.
These are the PyTorch models:
1.
class Encoder(nn.Module):
    def __init__(self, input_nc=1, ngf=64, C_channel=4, n_downsampling=3, norm_layer=nn.InstanceNorm2d, max_ngf=256, Conv_type="DW"):
        assert (n_downsampling >= 0)
        super(Encoder, self).__init__()
        activation = nn.ReLU(True)
        model = [nn.ReflectionPad2d(3), my_Conv(input_nc, ngf, kernel_size=7, padding=0, Conv_type=Conv_type, norm_layer=norm_layer, activation=activation), norm_layer(ngf), activation]
        ## downsample
        for i in range(n_downsampling):
            mult = 2 ** i
            model += [my_Conv(min(ngf * mult, max_ngf), min(ngf * mult * 2, max_ngf), kernel_size=3, stride=2, padding=1, Conv_type=Conv_type, norm_layer=norm_layer, activation=activation),
                      norm_layer(min(ngf * mult * 2, max_ngf)), activation]
        self.model = nn.Sequential(*model)
        self.projection = nn.Sequential(*[my_Conv(min(ngf * (2 ** n_downsampling), max_ngf), C_channel, kernel_size=3, stride=1, padding=1, Conv_type=Conv_type, norm_layer=norm_layer, activation=activation), norm_layer(C_channel), nn.Sigmoid()])

    def forward(self, input):
        z = self.model(input)
        return self.projection(z)
2.
class Decoder(nn.Module):
    def __init__(self, ngf=64, C_channel=4, n_downsampling=3, output_nc=1, n_blocks=3, norm_layer=nn.InstanceNorm2d, padding_type="reflect", max_ngf=256, Conv_type="DW", Dw_Index=None):
        assert (n_blocks >= 0)
        super(Decoder, self).__init__()
        activation = nn.ReLU(True)
        mult = 2 ** n_downsampling
        ngf_dim = min(ngf * mult, max_ngf)
        model = [my_Conv(C_channel, ngf_dim, kernel_size=3, stride=1, padding=1, Conv_type=Conv_type, activation=activation, norm_layer=norm_layer), norm_layer(ngf_dim), activation]
        for i in range(n_blocks):
            model += [ResnetBlock(ngf_dim, padding_type=padding_type, activation=activation, norm_layer=norm_layer, Conv_type=Conv_type)]
        for i in range(n_downsampling):
            mult = 2 ** (n_downsampling - i)
            model += [my_Deconv(min(ngf * mult, max_ngf), min(ngf * mult // 2, max_ngf), Conv_type=Conv_type, norm_layer=norm_layer, activation=activation),
                      norm_layer(min(ngf * mult // 2, max_ngf)), activation]
        if Conv_type == "NC":
            model += [nn.ReflectionPad2d(3), nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0), nn.Tanh()]
        else:
            model += [nn.ReflectionPad2d(3), DEPTHWISECONV(ngf, output_nc, kernel_size=7, padding=0), nn.Tanh()]
        self.model = nn.Sequential(*model)

    def forward(self, input):
        return self.model(input)
I successfully converted them from PyTorch to ONNX. The first problem ('list index out of range') arises when I use CubeMX.AI to convert the Encoder model, and the 'params set for InstanceNormalization layer' problem arises when I use CubeMX.AI to convert the Decoder model.
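(The export itself was a standard torch.onnx.export call. Since my_Conv / my_Deconv / ResnetBlock / DEPTHWISECONV are not shown in this thread, the sketch below uses a small stand-in model with the same Conv2d + InstanceNorm2d pattern rather than the exact Encoder/Decoder; the 128x128 input size and opset 11 are assumptions, not the actual settings.)

# Minimal sketch (hypothetical stand-in model, not the exact Encoder/Decoder above):
# export a small Conv2d + InstanceNorm2d network to ONNX the same way.
import torch
import torch.nn as nn

class DummyNet(nn.Module):                    # stand-in for illustration only
    def __init__(self):
        super(DummyNet, self).__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1),
            nn.InstanceNorm2d(64),            # the layer the CubeMX.AI error points at
            nn.ReLU(True),
        )

    def forward(self, x):
        return self.body(x)

net = DummyNet().eval()
dummy_input = torch.randn(1, 1, 128, 128)     # assumed input shape
torch.onnx.export(net, dummy_input, "dummy_instancenorm.onnx",
                  opset_version=11,           # assumed opset
                  input_names=["input"], output_names=["output"])

If that dummy model triggers the same 'params set for InstanceNormalization layer' message in CubeMX.AI, it could serve as the small untrained reproducer Daniel asked for.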
2021-12-03 12:09 AM