2025-11-24 1:03 AM - last edited on 2025-11-24 2:13 AM by Andrew Neil
Hello ST team,
I am currently trying to deploy an rPPG (remote photoplethysmography) model to the STM32N6 using the ST Edge AI Developer Cloud.
My model’s input tensor has the following 6-dimensional shape, i.e. (B, 8, 3, 36, 36, 3):
B = batch size
8 = temporal frames
3 = feature channels
36x36 = spatial resolution
3 = RGB channels
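For reference, this is roughly how the float model is exported before quantization. This is only a minimal sketch: the actual rPPG module is replaced by a placeholder, the dimension order is assumed to be exactly as listed above, and the int8 QDQ quantization is a separate step that is not shown.

```python
import torch

class RppgModel(torch.nn.Module):
    """Placeholder standing in for the real rPPG model."""
    def forward(self, x):
        return x * 1.0  # placeholder computation

model = RppgModel().eval()

# (B, temporal frames, feature channels, H, W, RGB channels)
dummy = torch.randn(1, 8, 3, 36, 36, 3)

torch.onnx.export(
    model,
    dummy,
    "rppg_fp32.onnx",
    input_names=["frames"],
    output_names=["bvp"],
    opset_version=17,
)
```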
I uploaded a fully quantized ONNX model (int8_qdq.onnx) through the web UI and ran Analyze using a custom profile.
However, the analysis fails with the following error:
To help reproduce the issue, I have attached my ONNX model file along with this post.
My questions are:
1. Does the current ST Edge AI Core / Developer Cloud support 6-D input tensors?
(e.g., models with multiple temporal + spatial + channel dimensions)
2. If not directly supported, what is the recommended way to reshape or flatten such inputs so that they can be processed by the toolchain? (A rough sketch of the kind of flattening I have in mind is included below these questions.)
3. Are there any references or documentation describing the maximum supported tensor rank for model inputs on STM32N6?
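To illustrate question 2, here is a rough, non-validated sketch of the kind of flattening I could apply before export. The wrapper below is hypothetical: it merges the temporal and feature-channel axes so the exported input is 5-D instead of 6-D, and whether this preserves the model's semantics depends on how the first layers actually consume the frames.

```python
import torch
import torch.nn as nn

class Flatten6DTo5D(nn.Module):
    """Hypothetical wrapper: merge temporal and feature-channel axes
    before the real model, so the exported input is 5-D instead of 6-D."""
    def __init__(self, inner: nn.Module):
        super().__init__()
        self.inner = inner  # the real rPPG model (placeholder in this sketch)

    def forward(self, x):                     # x: (B, 8, 3, 36, 36, 3)
        b = x.shape[0]
        x = x.reshape(b, 8 * 3, 36, 36, 3)    # -> (B, 24, 36, 36, 3), 5-D
        return self.inner(x)                  # first layers must accept this shape

# quick shape check with a placeholder inner model
wrapped = Flatten6DTo5D(nn.Identity())
print(wrapped(torch.randn(2, 8, 3, 36, 36, 3)).shape)  # torch.Size([2, 24, 36, 36, 3])
```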
Thank you very much for your help.
Looking forward to your guidance.
BCPH357
2025-11-28 7:59 AM
Hi @BCPH357,
I have opened an internal ticket to look into this, but the first comment is that 5D is the maximum supported tensor rank.
Looking at the ONNX file generated by the ONNX simplifier (attached), the first Reshape reduces the rank from 6 to 4, but the first dimension (the batch size) is larger than 1. The first three convolutions are therefore batched, i.e., they run with a batch size greater than 1. To be supported, they should be split into parallel convolutions sharing the same weights. After that there is a Reshape whose output has a batch size of 1, so the rest of the model should be fine (to be verified).
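As an illustration only (not validated on your model, and assuming a PyTorch source model with the temporal frames folded into the batch axis), here is a minimal sketch of how one batched convolution could be replaced by parallel convolutions sharing the same weights, so that every Conv node in the exported graph runs with batch size 1:

```python
import torch
import torch.nn as nn

class PerFrameConv(nn.Module):
    """Apply the same Conv2d to each frame separately (batch size 1 per call)."""
    def __init__(self, conv: nn.Conv2d, n_frames: int = 8):
        super().__init__()
        self.conv = conv              # one Conv2d instance -> weights are shared
        self.n_frames = n_frames

    def forward(self, x):             # x: (n_frames, C, H, W), i.e. batched input
        frames = torch.split(x, 1, dim=0)      # n_frames tensors of shape (1, C, H, W)
        outs = [self.conv(f) for f in frames]  # same weights applied to each frame
        return torch.cat(outs, dim=0)          # restack to (n_frames, C', H', W')

# example: replace a batched 3 -> 32 convolution on 36x36 frames
conv = nn.Conv2d(3, 32, kernel_size=3, padding=1)
y = PerFrameConv(conv)(torch.randn(8, 3, 36, 36))
print(y.shape)  # torch.Size([8, 32, 36, 36])
```

Because the ONNX export traces and unrolls the Python loop, the exported graph contains one Conv node per frame, all referencing the same weight initializer, which is what "parallel convolutions sharing the weights" means here.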
Have a good day,
Julian
2025-11-30 2:34 AM
Hello Julian,
Thank you very much for the quick response and for generating the simplified ONNX model.
I tested the provided simplified_model.onnx on the ST Edge AI Developer Cloud.
However, the Analyze step still fails with the following error:
I used the default settings for STM32N6 in the web UI, and I also verified with the CLI using:
Both UI and CLI produce the same internal error.
Since you mentioned that 5D is the maximum supported input rank, I assume the issue is still related to either:
1. the remaining reshape sequence,
2. the batched convolutions in the early layers, or
3. the overall input dimension configuration that the simplified model still carries.
Could you please advise on the following?
Is there an additional transformation needed to make the simplified ONNX model compatible (e.g., manually splitting the first few batched convolutions into parallel 2D convolutions as you mentioned)?
Would you recommend that I manually flatten or reorder dimensions before exporting the ONNX model, so that the input becomes 4D/5D before entering any convolution?
Is there a known constraint on reshape patterns that could still trigger this internal error, even when the model is reduced to ≤5D?
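To help narrow this down on my side, here is a small inspection script I plan to run on the simplified model (file name assumed) to list intermediate tensors that still have rank above 4 or a leading dimension larger than 1:

```python
import onnx
from onnx import shape_inference

# load the simplified model and run shape inference to populate value_info
model = shape_inference.infer_shapes(onnx.load("simplified_model.onnx"))

tensors = list(model.graph.input) + list(model.graph.value_info) + list(model.graph.output)
for vi in tensors:
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in vi.type.tensor_type.shape.dim]
    # flag anything with rank > 4 or a static leading dimension > 1
    if len(dims) > 4 or (dims and isinstance(dims[0], int) and dims[0] > 1):
        print(vi.name, dims)
```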
I have attached the error logs and can also provide the exact workflow or additional intermediate ONNX files if needed.
Thank you again for your assistance; it is greatly appreciated.
Best regards,
BCPH357