2025-07-04 3:56 AM
Hello,
I am trying to deploy a yolov8n model on STM32N6570-DK.
For that I tried to follow the instructions in https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/object_detection/deployment/doc/tuto/How_to_deploy_yolov8_yolov5_object_detection.md
I managed to train the model using ultralytics yolo, and I have a .pt model. However, I was not able to execute the
yolo export model=yolov8n.pt format=tflite imgsz=256 int8=True
command; it produced some errors. But I was able to get a .onnx model, which I quantized using the ST scripts in the model zoo services. I was able to validate the model and also run prediction on actual images and get correct results.
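For reference, the export step that did produce the .onnx model was roughly the following (a minimal sketch using the ultralytics Python API; the file name and imgsz=256 are my values and may need to be adapted):

from ultralytics import YOLO

# Load the trained weights and export them to ONNX at the 256x256 input size
# used later by the model zoo quantization and deployment scripts.
model = YOLO("yolov8n.pt")
model.export(format="onnx", imgsz=256)  # writes yolov8n.onnx next to the .pt file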
To run the prediction I had to apply --inputs-ch-position chfirst in the prediction params. The prediction worked well in all modes: host, stedgeai_host, and stedgeai_n6.
Now I am trying to deploy the model on the board. I used the deployment operation mode in the scripts, and it completed successfully. However, the model does not run as expected: the postprocess function produces results that cannot be displayed on the LCD.
So I am wondering: is there something I must do regarding the chfirst/chlast position of the inputs/outputs? In the Python prediction code I saw this transpose call
channel_first_images = np.transpose(images.numpy(), [0, 3, 1, 2])
before the data is sent to the ai_runner. Should I do something similar in the C code as well? I am using the object_detection application project from the model zoo services.
I tried to understand this article
https://stedgeai-dc.st.com/assets/embedded-docs/how_to_change_io_data_type_format.html
but it is not clear to me what needs to be done.
Thanks in advance
2025-07-10 7:03 AM
Hello @mtv,
To clarify a few points:
The option --inputs-ch-position chfirst tells the ST Edge AI Core to generate a C model whose input is channel first. It does not mean that the model you are giving to the stedgeai core is channel first.
In this case, if the model you provide is already channel first, the stedgeai core will not do anything. If your model is channel last, it will add a layer so that the generated model is indeed channel first.
In the case of the application from the model zoo, the sensor (camera) sends channel-last data. Thus, we use this option of the stedgeai core to keep a channel-last model.
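To illustrate the layout difference with plain NumPy (this only shows the shapes involved, not how the stedgeai core actually implements the conversion; the sizes are just an example):

import numpy as np

# Frame as delivered by the camera pipeline: channel last, shape (H, W, C)
frame_chlast = np.zeros((256, 256, 3), dtype=np.uint8)

# A channel-first network expects (C, H, W); the conversion layer added by the
# core is equivalent to this transpose:
frame_chfirst = np.transpose(frame_chlast, (2, 0, 1))
print(frame_chlast.shape, frame_chfirst.shape)  # (256, 256, 3) (3, 256, 256)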
In your message, you say:
"To do that I had to apply -inputs-ch-position chfirst in the prediction params. The prediction worked well in all modes, host, stedgeai_host and stedgeai_n6."
What does it mean exactly?
If you are doing validation with random data, this option should not have any influence on the results; channel first or channel last is equivalent in that case.
If you are doing validation with channel-first data, you indeed need this option, otherwise the shapes will not match. But with regard to the model zoo, this is not relevant, as the model zoo uses channel-last data.
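For example (plain NumPy, hypothetical shapes): a channel-first validation sample simply does not have the shape a channel-last generated model expects, which is why the option is needed in that case:

import numpy as np

# Hypothetical channel-first validation sample: (C, H, W)
sample = np.random.rand(3, 256, 256).astype(np.float32)

# Input shape of a channel-last generated model: (H, W, C)
model_input_shape = (256, 256, 3)
print(sample.shape == model_input_shape)  # False: the shapes do not match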
Then, when you say you have an issue with the LCD display, what is it exactly? Do you get the camera image but no bounding boxes? Wrong bounding boxes? Could you please clarify a bit?
For the ai_runner questions, I propose we first look at the other points.
Have a good day,
Julian
2025-07-11 9:18 AM
Hi Julian,
Thank you for your response. In the end, I managed to export a .tflite model using the ultralytics yolo export command, and I was able to deploy it to the board successfully.
However, I need to ask something. If we have an .onnx model that is channel first, how can we deploy it in an application that feeds the model camera data that is channel last? Do we have to change the model before using the stedgeai compiler?
Thanks