2025-07-04 3:56 AM
Hello,
I am trying to deploy a yolov8n model on STM32N6570-DK.
For that I tried to follow the instructions in https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/object_detection/deployment/doc/tuto/How_to_deploy_yolov8_yolov5_object_detection.md
I managed to train the model using the ultralytics yolo package and I now have a .pt model. However, I was not able to execute the
yolo export model=yolov8n.pt format=tflite imgsz=256 int8=True
command; it produced some errors. I was, however, able to export a .onnx model, which I quantized using the ST scripts in the model zoo services. I was able to validate the model and also run prediction on actual images and get correct results.
To do that I had to apply --inputs-ch-position chfirst in the prediction params. The prediction worked well in all modes: host, stedgeai_host and stedgeai_n6.
Now I am trying to deploy the model on the board. I used the deployment operation mode in the scripts and it was done successfully. However the model does not run as expected, the postprocess function produces results that cannot be displayed on the LCD.
So I am wondering: is there something I must do regarding the chfirst/chlast position of the inputs/outputs? In the Python code for prediction I saw this transpose call
channel_first_images = np.transpose(images.numpy(), [0, 3, 1, 2])
before the data is sent to the ai_runner. Should I do something similar in the C code as well? I am using the object_detection application project that is in the model zoo services.
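To be concrete, this is roughly what the shapes look like around that line (illustrative for my 256x256 RGB input; the variable names are mine, not from the model zoo scripts):

import numpy as np

# Camera-style / model zoo images are channel last: (batch, height, width, channels)
images_nhwc = np.random.rand(1, 256, 256, 3).astype(np.float32)

# The prediction script transposes them to channel first before calling ai_runner,
# because my quantized ONNX model expects (batch, channels, height, width)
images_nchw = np.transpose(images_nhwc, [0, 3, 1, 2])

print(images_nhwc.shape)  # (1, 256, 256, 3) -> what the camera delivers
print(images_nchw.shape)  # (1, 3, 256, 256) -> what the channel-first model expects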
I tried to understand this article
https://stedgeai-dc.st.com/assets/embedded-docs/how_to_change_io_data_type_format.html
but it is not clear to me what must be done
Thanks in advance
2025-07-10 7:03 AM
Hello @mtv,
To clarify a few points:
The option --inputs-ch-position chfirst tells the stedgeai core to generate a C model whose input is channel first. It does not mean that the model you are giving to the stedgeai core is channel first.
In this case, if the model you provide is already channel first, the stedgeai core will not do anything. If your model is channel last, it will add a layer so that the generated model is indeed channel first.
In the case of the application from the model zoo, the sensor (camera) sends channel-last data. Thus, we use this option of the stedgeai core to generate a channel-last model.
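Purely as an illustration (the target name and file names below are placeholders, please check the ST Edge AI Core documentation for your exact setup), the two situations look roughly like this on the command line:

# channel-first model, channel-last camera data: let the code generator insert the transpose
stedgeai generate --model yolov8n_quant.onnx --target stm32n6 --inputs-ch-position chlast

# validating with channel-first reference data: keep the generated model channel first so the shapes match
stedgeai validate --model yolov8n_quant.onnx --target stm32n6 --inputs-ch-position chfirst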
In your message, you say:
"To do that I had to apply -inputs-ch-position chfirst in the prediction params. The prediction worked well in all modes, host, stedgeai_host and stedgeai_n6."
What does it mean exactly?
If you are doing validation with random data, this option should not have any influence on the results: being channel first or channel last is the same in this case.
If you are doing validation with channel-first data, you do indeed need this option, otherwise the shapes will not match. But with regard to the model zoo this is not relevant, as the model zoo uses channel-last data.
Then, when you say you have an issue with the LCD display, what is it exactly? Do you get the image from the camera but no bounding boxes? Wrong bounding boxes? Can you please clarify a bit?
For the AI_runner issues, I propose to first look at the other points.
Have a good day,
Julian
2025-07-11 9:18 AM
Hi Julian
Thank you for your response. I finally managed to export a .tflite model using the ultralytics yolo export command, and I was able to deploy it to the board successfully.
However I need to ask something. If we have a .onnx model that is channel first, how can we deploy it in an application that applies the model to camera data that is channel last? Do we have to change the model before using the stedgeai compiler?
Thanks
2025-07-15 12:56 AM
Hello @mtv,
In the demo application:
If you have a channel-first model, you need to use the option --inputs-ch-position chlast to generate a C model that takes channel-last input data. For the rest, it should be the same.
If you use the deployment from the model zoo (which is based on the demo applications), this option is used by default.
That should be it. In any case, you can look at the default model used (or any other available in the model zoo) and try to replicate its inputs and outputs with the model you want to use. It should only be a matter of adding transpose layers.
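If you ever want to do that by hand on the ONNX side (the deployment normally does it for you), a rough sketch with the onnx Python package could look like the following; the file and tensor names are only examples, not something taken from the model zoo:

import onnx
from onnx import helper

model = onnx.load("yolov8n_chfirst.onnx")  # example file name
graph = model.graph

old_input = graph.input[0]  # e.g. shape (1, 3, 256, 256), NCHW
n, c, h, w = [d.dim_value for d in old_input.type.tensor_type.shape.dim]

# New NHWC (channel-last) input matching what the camera pipeline delivers
new_input = helper.make_tensor_value_info(
    old_input.name + "_nhwc", old_input.type.tensor_type.elem_type, [n, h, w, c])

# Transpose NHWC -> NCHW and write to the original input tensor name,
# so all existing nodes keep their connections
transpose = helper.make_node("Transpose", inputs=[new_input.name],
                             outputs=[old_input.name], perm=[0, 3, 1, 2],
                             name="nhwc_to_nchw")

graph.node.insert(0, transpose)
graph.input.remove(old_input)
graph.input.insert(0, new_input)

onnx.save(model, "yolov8n_chlast_input.onnx")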
Have a good day,
Julian
2025-07-15 6:44 AM - edited 2025-07-15 6:56 AM
Hello,
May I ask whether the .tflite model you deployed to the board runs successfully? The model I deployed is not running properly, and no bounding boxes are displayed on the LCD. I am not sure what caused it either.
I think the problem I encountered might be similar to the one you faced before. Could you tell me roughly how you solved it?
Thank you very much.
2025-07-15 7:48 AM
Hi Julian,
Thank you for the answer. So if I understand correctly, the option --inputs-ch-position chlast makes the generated model accept channel-last input and adds a transpose layer to make it channel first, as required by the model. I am guessing that this would affect the inference time of the model. So it is best to have a channel-last model, matching the camera data. Am I correct?
Thanks
2025-07-15 8:07 AM
Hello,
Yes, the .tflite model works as expected. As I mentioned in my original post, I followed the instructions and trained a yolov8n model using the ultralytics yolo package. Then I was able to get the .tflite file with the following yolo command:
yolo export model=yolov8n.pt format=tflite imgsz=256 int8=True opset=17
After that I used the instructions in
to quantize the .tflite model.
Before deploying the model I used the prediction mode in the scripts provided in the ST model zoo services to make sure that the model could detect objects in some photos. This way I was able to fine-tune the parameters needed by the postprocessing routines that actually produce the bounding boxes.
Finally I used the deployment mode in the scripts to deploy the object detection project to the board. And that works.
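In case it helps, the commands I run look roughly like this (script and config file names follow the model zoo layout I am using; adapt them to your own checkout and configuration):

# prediction mode: run the model on a few photos and tune the postprocessing parameters
python stm32ai_main.py --config-path ./src/config_file_examples --config-name prediction_config.yaml

# deployment mode: build and flash the object_detection application with the model
python stm32ai_main.py --config-path ./src/config_file_examples --config-name deployment_config.yaml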
I hope that is somehow helpful for you.
Good luck
2025-07-15 8:42 AM
Hello @mtv,
The stedgeai core detects whether your model is channel first or channel last. The option --inputs-ch-position sets the layout of the generated model.
Indeed, if you have a channel-first model and use the option --inputs-ch-position chlast, it will add a transpose, which affects the performance. Depending on the model, this may be negligible. But working with a model that is already channel last is certainly better.
@Eileen, if the answer of @mtv is not enough to help you, please create a new thread to keep things clear for future readers.
Have a good day,
Julian
2025-07-15 11:34 AM
Hello,
Thank you for the answer. Based on your reply, I found that I had overlooked a file. Now I have successfully deployed the model and it is running normally.
Thank you again for your help, and I'm very grateful.