
INTERNAL ERROR: Order of dimensions of input cannot be interpreted

comlou
Associate II

I'm getting this error and don't understand the problem. I tried adding the suggested "--use-onnx-simplifier" flag from another post in the web command line, but it didn't look like it took effect. Here is the output:

 

>>> stedgeai analyze --model version-RFB-320.onnx --st-neural-art custom@/tmp/stm32ai_service/7f0992f6-31d1-46c4-892e-ecc0375878b5/profile-4de288e7-27c5-4d83-814b-a8f7038dc635.json --target stm32n6 --optimize.export_hybrid True --name network --workspace workspace --output output
ST Edge AI Core v2.2.0-20266 2adc00962
WARNING: Unsupported keys in the current profile custom are ignored: memory_desc > memory_desc is not a valid key anymore, use machine_desc instead
INTERNAL ERROR: Order of dimensions of input cannot be interpreted

 

comlou_0-1759371110078.png

I have also tried using a transpose layer to order the inputs as [1, H, W, C], and that still caused this problem. Is there a way to get more verbose output than what I have? It's not super helpful as to what to fix. I tried the transpose and spent a lot of time figuring this out, to no avail, because it said it didn't like the order of the input. Please help, thanks.

8 REPLIES
Thomas-Carl
Associate

This error usually happens because the STM32 AI tools expect the input dimensions in a specific order. By default, models should be in NCHW format [batch, channels, height, width]. If your ONNX model is exported in [1, H, W, C] (NHWC), the tool can't interpret it correctly.

A few things you can try:

  1. Re-export the PyTorch model → ONNX with --dynamic_axes set, making sure the input is [1,3,H,W].

  2. Run the onnx-simplifier locally before uploading:

     
    python3 -m onnxsim input.onnx output.onnx

    (sometimes the Web UI flag doesn’t actually apply simplification).

  3. Use Netron to inspect your ONNX file and verify input shape order.

  4. If you still get errors, try converting to TFLite first with the correct input layout, then import into STM32Cube.AI.
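Point 3 is worth doing first. If you are unsure which layout your model uses, keep in mind that NHWC vs NCHW is just an axis permutation; here is a minimal numpy sketch (the shapes are illustrative, not taken from the model above):

```python
import numpy as np

# Illustrative NHWC input: batch 1, 240x320 image, 3 channels
nhwc = np.zeros((1, 240, 320, 3), dtype=np.float32)

# NHWC -> NCHW: move the channel axis from position 3 to position 1
nchw = np.transpose(nhwc, (0, 3, 1, 2))

print(nchw.shape)  # (1, 3, 240, 320)
```

If Netron shows the input already as [1, 3, H, W] and you still hit the error, the problem is likely elsewhere in the graph rather than in the input layout.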

Checking the ONNX input format with Netron usually points you in the right direction.

Julian E.
ST Employee

Hello @comlou,

 

Thank you for your message and models.

 

It seems that there is an issue with node 469 in the ST Edge AI Core generation:

JulianE_1-1759394308159.png

 

I contacted the dev team to have more information.

I will update you when I get any news.

 

Have a good day,

Julian

 

 


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Julian,

Thanks for looking into this, it's greatly appreciated. How did you get to that conclusion, is there debug info I'm missing?

Hello @comlou,

 

You can print more logs by running "export _DEBUG=2" (in a Git Bash; it may be different depending on the terminal you use).
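For anyone landing here on a different terminal: the variable name _DEBUG comes from the post above, and the per-shell syntax for setting an environment variable is the standard one (verify against your own setup):

```shell
# Git Bash / Linux shell (what is mentioned above):
export _DEBUG=2
# Windows cmd would be:   set _DEBUG=2
# PowerShell would be:    $env:_DEBUG=2
echo "_DEBUG=$_DEBUG"
```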

To be honest, it is mainly for the dev team; you will see that it is very hard to understand.
I myself can only roughly see where the issue is coming from.

 

So, you did not miss anything. 

 

Here is the comment of the dev team:

The specific issue for this model is the end attribute of the slice layer. The value is 9223372036854775807 which is MAX_INT64. The special case of 9223372036854775807 is recognized in some cases (not sure if it is for attribute, input or both) but MAX_INT64 is never recognized. To be checked but replacing it with 4 (the shape size on that axis) should solve the problem.

 

To comment on that: if you open the model with Netron and look at the Slice layer, you will see that the "ends" parameter is set to this max value, 9223372036854775807.

You can try using onnx and/or ONNX GraphSurgeon to edit the graph and set this value to 4 (in the core's log, you can see that the value it was expecting was 4).
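To make the dev team's comment concrete: 9223372036854775807 is 2^63 - 1 (INT64_MAX), which ONNX uses as a "slice to the end of this axis" sentinel. A tiny pure-Python sketch of the replacement they suggest (the helper name is mine, not part of any tool):

```python
INT64_MAX = 2**63 - 1  # 9223372036854775807, ONNX's "slice to end" sentinel

def clamp_slice_ends(ends, axes, dims):
    """Replace INT64_MAX sentinels in a Slice's 'ends' with the real axis size."""
    return [dims[a] if e == INT64_MAX else e for e, a in zip(ends, axes)]

# For this model: the sliced axis (2) has size 4, so ends becomes [4]
print(clamp_slice_ends([9223372036854775807], axes=[2], dims=[1, 4420, 4]))  # [4]
```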

 

I tried some things, but I think I got stuck due to opset/IR versions.

Could you try on your side? You may need to export your model again with a different opset version.

 

Have a good day,

Julian



Ok, I think I understand what you're saying. I'll try to change that to "4" and let you know. Thanks for the update.

Julian,

Ok, I was able to make the changes to what turned out to be 3 Slice layers that had the same value, but now I see a new error and don't understand why it's showing up. If I compare a Slice layer that I didn't change to one I changed, I don't see anything different related to the error. Below are some points of interest that I separated using '=====...====='. Can you see anything wrong here? I used node_462 as a reference because I didn't change that one. I also attached the latest modified model. One important note: this modified model works as expected in a Jupyter notebook, as does the original model before modifying it.

Thanks for your help. I'm trying to get all this working for work, using the N6 on a possible new product.

 

==========================================================================================================
 

Error:

INTERNAL ERROR: node_469 of type Slice has not parameter orig_steps. Available parameters are dict_keys(['orig_axes', 'orig_starts', 'orig_ends', 'starts', 'ends'])

 

==========================================================================================================
 
Layer node_462 (Slice)
  ID 225 - order 435
  Predecessors
0: node_460(0)
  Successors
0: node_464
  orig_axes: [2]
  orig_ends: [2]
  orig_starts: [0]
  Optimized: []
Layer node_469 (Slice)
  ID 226 - order 436
  Predecessors
0: node_460(0)
  Successors
0: node_471
  orig_axes: [2]
  orig_ends: [4]
  orig_starts: [2]
  Optimized: []
  
==========================================================================================================
Computing all activation shapes of node_462 (Slice)
   In shapes [(BATCH: 1, CH: 4420, H: 4)] - In values [None]
  Resetting shape of node_462
  Computing starts and ends starting from orig_axes, orig_starts, orig_ends
   Found new output shapes: [(BATCH: 1, CH: 4420, H: 2)]
Computing all activation shapes of node_469 (Slice)
   In shapes [(BATCH: 1, CH: 4420, H: 4)] - In values [None]
  Resetting shape of node_469
  Computing starts and ends starting from orig_axes, orig_starts, orig_ends
   Found new output shapes: [(BATCH: 1, CH: 4420, H: 2)]
==========================================================================================================
######### Used to align the opset and ir_version
%pip install onnx==1.15.0
%pip install onnx-graphsurgeon
 
from onnx import __version__, IR_VERSION
from onnx.defs import onnx_opset_version
print(f"onnx.__version__={__version__!r}, opset={onnx_opset_version()}, IR_VERSION={IR_VERSION}")
onnx.__version__='1.15.0', opset=20, IR_VERSION=9
==========================================================================================================
#Used to make the conversion
 
import onnx
import onnx_graphsurgeon as gs
 
# 1. Load the original ONNX model
onnx_file = "version-RFB-320.onnx"
model = onnx.load(onnx_file)
graph = gs.import_onnx(model)
 
for node in graph.nodes:
    if node.op == 'Slice':
        # Only touch Slice nodes that actually carry an 'ends' attribute
        if 'ends' in node.attrs and node.attrs['ends'][0] > 2:
            print(f"Original 'ends' attribute value: {node.attrs['ends']}")
            node.attrs["ends"] = [4]
 
# 2. Clean up and save the modified model
graph.cleanup().toposort()  # Remove unused tensors and nodes, then reorder
onnx.save(gs.export_onnx(graph), "version-RFB-320_modified_order.onnx")
==========================================================================================================

Julian,

Sorry for all this information, but it's been bothering me that I fixed the first error and now get this other error that does not appear to be valid. What I did was make a very simple model with some Slices. I get this new error as in my past post, but here I can reverse the Slices and it does not error out on the same Slice; it seems like it's always pointing to the last Slice. If I reverse the order, it points to the Slice that passed before. This is what tells me it's not making sense and it might be the toolchain. Also, please note AGAIN, the original models work elsewhere, even before making any changes. Below is the output for the simplified model.

 

comlou_0-1759635006432.png

 

 

C:\workspace\stm32_blaze_face>stedgeai analyze --model slice_example.onnx --st-neural-art --target stm32n6 --optimize.export_hybrid True --name network --workspace workspace --output output
ST Edge AI Core v2.2.0-20266 2adc00962
Starting pass initialization type(ACTIVATION_SHAPES_COMPUTER) id(18)
Ended pass initialization type(ACTIVATION_SHAPES_COMPUTER) id(18)
Trying to compute shapes with all the input shape maps
Computing all possible activation shapes
Computing all activation shapes of input (Input)
Setting shape of input
Shape of input is (BATCH: 1, H: 4420, CH: 4)
Shape of input is (H: 1, W: 4420, CH: 4)
Computing all activation shapes of node_201 (Slice)
In shapes [(BATCH: 1, H: 4420, CH: 4)] - In values [None]
Resetting shape of node_201
Computing starts and ends starting from orig_axes, orig_starts, orig_ends
Found new output shapes: [(BATCH: 1, H: 4420, CH: 2)]
In shapes [(H: 1, W: 4420, CH: 4)] - In values [None]
Resetting shape of node_201
Computing starts and ends starting from orig_axes, orig_starts, orig_ends
Found new output shapes: [(H: 1, W: 4420, CH: 2)]
Computing all activation shapes of node_200 (Slice)
In shapes [(BATCH: 1, H: 4420, CH: 4)] - In values [None]
Resetting shape of node_200
Computing starts and ends starting from orig_axes, orig_starts, orig_ends
Found new output shapes: [(BATCH: 1, H: 4420, CH: 2)]
In shapes [(H: 1, W: 4420, CH: 4)] - In values [None]
Resetting shape of node_200
Computing starts and ends starting from orig_axes, orig_starts, orig_ends
Found new output shapes: [(H: 1, W: 4420, CH: 2)]
Computing all activation shapes of boxes (Concat)
In shapes [(BATCH: 1, H: 4420, CH: 2), (BATCH: 1, H: 4420, CH: 2)] - In values [None, None]
Resetting shape of boxes
Axis is CH
Found new output shapes: [(BATCH: 1, H: 4420, CH: 4)]
In shapes [(H: 1, W: 4420, CH: 2), (H: 1, W: 4420, CH: 2)] - In values [None, None]
Resetting shape of boxes
Axis is CH
Found new output shapes: [(H: 1, W: 4420, CH: 4)]
In shapes [(BATCH: 1, H: 4420, CH: 2), (H: 1, W: 4420, CH: 2)] - In values [None, None]
Resetting shape of boxes
Axis is CH
Invalid combination of input shapes
In shapes [(H: 1, W: 4420, CH: 2), (BATCH: 1, H: 4420, CH: 2)] - In values [None, None]
Resetting shape of boxes
Axis is CH
Invalid combination of input shapes
Computed all possible output shapes
Resetting shape of input
Resetting shape of node_201
Resetting shape of node_200
Resetting shape of boxes
Resetting shape of boxes
Resetting shape of node_201
Resetting shape of node_200
Shape of input is (BATCH: 1, H: 4420, CH: 4)
Setting shape (2) of input
Computing definitive activation shapes
Computing activation shapes of input (Input)
In shapes [] - In values []
Output shapes [(BATCH: 1, H: 4420, CH: 4)] - Output values [None]
Computing activation shapes of node_201 (Slice)
In shapes [(BATCH: 1, H: 4420, CH: 4)] - In values [None]
Computing starts and ends starting from orig_axes, orig_starts, orig_ends
Output shapes [(BATCH: 1, H: 4420, CH: 2)] - Output values [None]
Computing activation shapes of node_200 (Slice)
In shapes [(BATCH: 1, H: 4420, CH: 4)] - In values [None]
Computing starts and ends starting from orig_axes, orig_starts, orig_ends
Output shapes [(BATCH: 1, H: 4420, CH: 2)] - Output values [None]
Computing activation shapes of boxes (Concat)
In shapes [(BATCH: 1, H: 4420, CH: 2), (BATCH: 1, H: 4420, CH: 2)] - In values [None, None]
Axis is CH
Output shapes [(BATCH: 1, H: 4420, CH: 4)] - Output values [None]
Printing Intermediate Representation
Printing graph
Printing information about input layers
input
Printing information about tensor inputs
Printing information about output layers
boxes (0)
Printing information about layers - size(4)
Layer input (Input)
ID 0 - order 1
Predecessors
Successors
0: node_200 - Shape: (BATCH: 1, H: 4420, CH: 4)
0: node_201 - Shape: (BATCH: 1, H: 4420, CH: 4)
other_shape_maps: [(H, W, CH)]
Formats
out_0: (FLOAT, 32 bit, C Size: 32 bits)
Optimized: []
Layer node_201 (Slice)
ID 1 - order 2
Predecessors
0: input(0) - Shape: (BATCH: 1, H: 4420, CH: 4)
Successors
0: boxes - Shape: (BATCH: 1, H: 4420, CH: 2)
ends: {CH: 4}
orig_axes: [2]
orig_ends: [4]
orig_starts: [2]
starts: {CH: 2}
Optimized: []
Layer node_200 (Slice)
ID 2 - order 3
Predecessors
0: input(0) - Shape: (BATCH: 1, H: 4420, CH: 4)
Successors
0: boxes - Shape: (BATCH: 1, H: 4420, CH: 2)
ends: {CH: 2}
orig_axes: [2]
orig_ends: [2]
orig_starts: [0]
starts: {CH: 0}
Optimized: []
Layer boxes (Concat)
ID 3 - order 4
Predecessors
0: node_201(0) - Shape: (BATCH: 1, H: 4420, CH: 2)
1: node_200(0) - Shape: (BATCH: 1, H: 4420, CH: 2)
Successors
Output shape: [(BATCH: 1, H: 4420, CH: 4)]
axis: CH
orig_axis: 2
Optimized: []
Printing information about tensors - size(0)
Tensors
Printing graph
Printing c_config
activations[0] size(0):
custom: {}
net_data: {'model_strings': []}
formats: []
functions: []
hybrid_lite: {'arrays': {}, 'classic': {}, 'intqs': {}, 'layers': [], 'tensors': {}}
includes: set()
Layers:
[]
lite_graphs: {}
mem_pools: []
states: []
Tensors:
weights[0] size(0):
Neural_Art_profile: NeuralArtProfileFile(filename=WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/windows/targets/stm32/resources/neural_art.json'), profile_name='default', compiler_path=WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/windows/atonn.exe'), flags=['--native-float', '--mvei', '--cache-maintenance', '--Ocache-opt', '--enable-virtual-mem-pools', '--Os', '--Oauto-sched'], memory_pool=WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/windows/targets/stm32/resources/mpools/stm32n6.mpool'), machine_desc=WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/configs/stm32n6.mdesc'), logger=<Logger NeuralArtProfileFile (INFO)>, profile_required_keys={'memory_pool', 'options'}, profile_optional_keys={'machine_desc'}, _sec_global={}, _profiles={'minimal': {'memory_pool': WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/windows/targets/stm32/resources/mpools/stm32n6.mpool'), 'options': '--native-float --mvei --cache-maintenance --Ocache-opt', 'machine_desc': WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/configs/stm32n6.mdesc')}, 'default': {'memory_pool': WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/windows/targets/stm32/resources/mpools/stm32n6.mpool'), 'options': '--native-float --mvei --cache-maintenance --Ocache-opt --enable-virtual-mem-pools --Os --Oauto-sched', 'machine_desc': WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/configs/stm32n6.mdesc')}, 'allmems--O3': {'memory_pool': WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/windows/targets/stm32/resources/mpools/stm32n6.mpool'), 'options': '--native-float --mvei --cache-maintenance --Ocache-opt --enable-virtual-mem-pools --Os --optimization 3 --Oauto-sched', 'machine_desc': WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/configs/stm32n6.mdesc')}, 'allmems--auto': {'memory_pool': WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/windows/targets/stm32/resources/mpools/stm32n6.mpool'), 'options': '--native-float --mvei --cache-maintenance --Ocache-opt --enable-virtual-mem-pools --Os --Oauto', 'machine_desc': WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/configs/stm32n6.mdesc')}, 
'internal-memories-only--default': {'memory_pool': WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/windows/targets/stm32/resources/mpools/stm32n6__internal_memories_only.mpool'), 'options': '--native-float --mvei --cache-maintenance --Ocache-opt --enable-virtual-mem-pools --Os --Oauto-sched', 'machine_desc': WindowsPath('C:/ST/STEdgeAI/2.2/Utilities/configs/stm32n6.mdesc')}})
Printed c_config
WARNING: node_200 is not quantized
WARNING: node_201 is not quantized
WARNING: boxes is not quantized

INTERNAL ERROR: node_200 of type Slice has not parameter orig_steps. Available parameters are dict_keys(['orig_axes', 'orig_starts', 'orig_ends', 'starts', 'ends'])

C:\workspace\stm32_blaze_face>

Julian,

 

Sorry again for all the messages, but I have been trying to use this model for a long time. I finally got it to generate the N6 code. What I was able to do was change all the Slices to Gathers. I personally think the toolchain has an issue with Slice even though the model is fine. I have not tested the code on the N6 yet, but I have tested the output model from the ST toolchain and it still works in my Jupyter notebook, so I'm confident. I'll update the post with the results. This is what the replacements look like:

 

comlou_0-1759696810339.png
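For reference, the swap above is numerically safe for unit-step slices: gathering the same contiguous indices on an axis returns exactly the sliced values. A numpy sketch of the equivalence (a small stand-in shape for the (1, 4420, 4) tensor in the logs):

```python
import numpy as np

x = np.arange(1 * 5 * 4, dtype=np.float32).reshape(1, 5, 4)

# What node_469 did: Slice on axis 2 with starts=[2], ends=[4]
sliced = x[:, :, 2:4]

# The Gather replacement: same axis, explicit indices [2, 3]
gathered = np.take(x, [2, 3], axis=2)

print(np.array_equal(sliced, gathered))  # True
```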

 

If you can keep me up to date on the Slice status, that would be great.

 

A few takeaways here:

- In the toolchain output, when an error occurs, maybe offer the _DEBUG=2 suggestion; this could have avoided me even posting here. I see that it gives further _DEBUGx suggestions with that turned on.

 

Thanks for the help