2025-07-04 3:56 AM
Hello,
I am trying to deploy a yolov8n model on STM32N6570-DK.
For that, I tried to follow the instructions in https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/object_detection/deployment/doc/tuto/How_to_deploy_yolov8_yolov5_object_detection.md
I managed to train the model with Ultralytics YOLO and I have a .pt file. However, I was not able to run the
yolo export model=yolov8n.pt format=tflite imgsz=256 int8=True
command; it produced some errors. I was, however, able to export a .onnx model, which I quantized using the ST scripts from the model zoo services. I was then able to validate the model, run prediction on actual images, and get correct results.
To do that I had to set -inputs-ch-position chfirst in the prediction parameters. Prediction worked well in all modes: host, stedgeai_host and stedgeai_n6.
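In case it is useful, this is roughly how I checked that the quantized ONNX model expects channel-first input (the file name below is just a placeholder for my model, and I assume a plain CPU onnxruntime install):

import onnxruntime as ort

# Placeholder path to the quantized model produced by the model zoo scripts.
sess = ort.InferenceSession("yolov8n_256_quant.onnx", providers=["CPUExecutionProvider"])

inp = sess.get_inputs()[0]
out = sess.get_outputs()[0]

# For my model the input shape looks like [1, 3, 256, 256], i.e. NCHW (chfirst),
# which is why the chfirst option was needed on the host side.
print("input :", inp.name, inp.shape, inp.type)
print("output:", out.name, out.shape, out.type)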
Now I am trying to deploy the model on the board. I used the deployment operation mode in the scripts and it completed successfully. However, the model does not run as expected: the postprocess function produces results that cannot be displayed on the LCD.
So I am wondering: is there something I must do regarding the chfirst/chlast position of the inputs/outputs? In the Python prediction code I saw this transpose
channel_first_images = np.transpose(images.numpy(), [0, 3, 1, 2])
before the data is sent to the ai_runner. Should I do something similar in the C code as well? I am using the object_detection application project from the model zoo services.
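Just to make sure I am reading that transpose correctly, this is my understanding of the layout conversion in plain numpy (dummy data, nothing taken from the actual scripts):

import numpy as np

# Dummy batch of images in the chlast / NHWC layout (batch, height, width, channels),
# which is what the image loading pipeline produces.
images_nhwc = np.random.randint(0, 256, size=(1, 256, 256, 3), dtype=np.uint8)

# Same conversion as in the prediction script: NHWC -> NCHW (chfirst),
# i.e. (batch, channels, height, width), before the data goes to the model.
images_nchw = np.transpose(images_nhwc, [0, 3, 1, 2])
print(images_nhwc.shape, "->", images_nchw.shape)  # (1, 256, 256, 3) -> (1, 3, 256, 256)

# And the inverse, in case a tensor ever needs to go back to channel-last.
back_to_nhwc = np.transpose(images_nchw, [0, 2, 3, 1])
assert back_to_nhwc.shape == images_nhwc.shape

What I cannot figure out is whether an equivalent step is needed (or already handled) on the C side of the application.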
I tried to understand this article
https://stedgeai-dc.st.com/assets/embedded-docs/how_to_change_io_data_type_format.html
but it is not clear to me what needs to be done.
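My (possibly wrong) understanding of that article is that it is mainly about converting between float application data and the quantized I/O buffers using each tensor's scale and zero point. In numpy terms, something like the sketch below, where the scale and zero_point values are made-up placeholders, not taken from my model:

import numpy as np

# Placeholder quantization parameters; in practice they come from the generated
# network / model report.
scale, zero_point = 1.0 / 255.0, -128

def quantize(x_float, scale, zero_point, dtype=np.int8):
    # Float application data -> quantized values for the model input.
    q = np.round(x_float / scale) + zero_point
    info = np.iinfo(dtype)
    return np.clip(q, info.min, info.max).astype(dtype)

def dequantize(x_quant, scale, zero_point):
    # Quantized model output -> float values usable by the post-processing.
    return (x_quant.astype(np.float32) - zero_point) * scale

img = np.random.rand(1, 3, 256, 256).astype(np.float32)  # normalized [0, 1] input
q_img = quantize(img, scale, zero_point)
restored = dequantize(q_img, scale, zero_point)
print(q_img.dtype, np.abs(restored - img).max())  # small reconstruction error

But I do not see how (or whether) this relates to the chfirst/chlast issue in the deployed C application, so any pointer would be appreciated.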
Thanks in advance