Deploying a custom-trained PyTorch YOLOv11 model on the STM32N6570-DK

dev8
Associate

Hello,

I am working on deploying a custom PyTorch YOLOv11 model to the STM32N6570-DK board using the STM32 AI Developer Cloud / STM32 AI Model Zoo workflow.

Currently, I have my trained model in PyTorch (.pt format), and I can also export it to ONNX (the export step I am using is sketched below). My goal is to run inference on the STM32N6570-DK with int8 quantization, going through TFLite or ONNX to reach a format the STM32 tools support.
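
For reference, this is the export step I am using (a minimal sketch; I am assuming the Ultralytics YOLO API, and the checkpoint path, image size, and opset are placeholders for my actual settings):

```python
# ONNX export sketch (assumes the Ultralytics package; the checkpoint
# path is a placeholder for my trained weights).
from ultralytics import YOLO

# Load the custom-trained YOLOv11 checkpoint.
model = YOLO("runs/detect/train/weights/best.pt")

# Export with a fixed 640x640 input; opset 17 and simplify=True are
# assumptions on my side, not values taken from ST documentation.
model.export(format="onnx", imgsz=640, opset=17, simplify=True)
```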

Could you please guide me on the following:

  1. What is the recommended workflow to convert a custom YOLOv11 PyTorch model into a format compatible with STM32 AI tools?

  2. Should I first export to ONNX, then quantize to int8 TFLite, and finally run the result through the STM32Cube.AI / STEdgeAI tools? (The quantization step I have in mind is sketched after this list.)

  3. Are there any specific constraints or optimizations needed for YOLOv11 models to run efficiently on the STM32N6570-DK?
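
To make question 2 concrete, this is the kind of int8 conversion I have in mind (a minimal sketch, assuming the Ultralytics TFLite export path; the dataset YAML is a placeholder for my calibration data):

```python
# int8 TFLite export sketch (assumes Ultralytics drives the
# PyTorch -> ONNX -> TensorFlow -> TFLite conversion internally).
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")

# int8=True requests full integer post-training quantization; the data
# YAML is a placeholder pointing at representative calibration images.
model.export(format="tflite", int8=True, imgsz=640, data="my_dataset.yaml")
```

My understanding is that the resulting .tflite would then be passed to the STEdgeAI code generator for the N6 (something like `stedgeai generate --model <model>.tflite --target stm32n6`, if I have the CLI right). Please correct me if that chain is wrong.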

Any detailed steps, documentation links, or examples would be very helpful.

Thank you in advance!
