Hi everyone,
I'm working with an STM32N6 board and trying to deploy a custom YOLOv8n model in ONNX format using CubeMX and STM32 Edge AI. I’ve run into a few issues and was hoping someone could help clarify the process.
Here’s what I’ve done so far:
I successfully imported my yolov8n.onnx model into CubeMX + Edge AI.
When I try to test the model with a .png image, I get an error saying the PNG format is not supported.
When I generate the C code from the model, network.c contains a bare -inf literal that is not valid C, so the project fails to compile (my attempted workaround is sketched just below).
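For the -inf error, my working assumption is that the generator emitted a bare -inf literal where float negative infinity was intended (the variable name below is just illustrative, not the actual one from my network.c). The stopgap I was considering looks roughly like this, using INFINITY from math.h (or -FLT_MAX as a fallback), but I'd prefer to know the proper fix on the tool side:

```c
/* Stopgap I was considering for the generated network.c -- not an official fix.
 * Assumption: the bare "-inf" literal is meant to be float negative infinity. */
#include <math.h>   /* INFINITY (C99) */
#include <float.h>  /* FLT_MAX as a fallback if INFINITY is unavailable */

#ifndef INFINITY
#define INFINITY FLT_MAX
#endif

/* The generated line fails to compile as something like:
 *     static const float pad_value = -inf;
 * My plan was to hand-edit it into a valid constant: */
static const float pad_value = -INFINITY;   /* or -FLT_MAX */
```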
My Questions:
How can I test my model using a PNG image, either on the STM32N6 device or in a desktop simulation?
Do I need to convert the PNG to raw input data manually?
What’s the correct way to prepare a PNG image as an input buffer for use with the STM32 inference code? (A sketch of what I was planning to try is below these questions.)
How do I resolve the -inf not defined issue in the generated network.c?
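For context on the input-buffer question, here is roughly what I was planning to do on the desktop to turn a PNG into raw model input. Everything here is an assumption on my part: it uses the single-header stb_image library for decoding, expects the image to already be 640x640, and writes float32 values scaled to [0,1] in HWC (RGB) order, which may or may not match what the generated model actually wants:

```c
/* Desktop-side sketch: PNG -> raw float32 buffer for the model input.
 * My assumptions (not verified against the generated code):
 *  - stb_image.h is available for PNG decoding
 *  - the image is already 640x640, 3 channels (no resize / letterbox here)
 *  - the model expects float32 in [0,1], HWC (RGB) layout
 */
#include <stdio.h>
#include <stdlib.h>

#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

#define MODEL_W 640
#define MODEL_H 640
#define MODEL_C 3

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s input.png output.bin\n", argv[0]);
        return 1;
    }

    int w, h, c;
    unsigned char *pixels = stbi_load(argv[1], &w, &h, &c, MODEL_C);
    if (!pixels || w != MODEL_W || h != MODEL_H) {
        fprintf(stderr, "expected a %dx%d RGB image\n", MODEL_W, MODEL_H);
        return 1;
    }

    /* Normalize uint8 [0,255] -> float32 [0,1], keeping HWC layout */
    size_t count = (size_t)MODEL_W * MODEL_H * MODEL_C;
    float *input = malloc(count * sizeof(float));
    for (size_t i = 0; i < count; i++) {
        input[i] = pixels[i] / 255.0f;
    }

    /* Dump the raw buffer; the idea is to copy this into the network's
     * input tensor on the board (or feed it to a desktop validation run). */
    FILE *out = fopen(argv[2], "wb");
    if (!out) {
        fprintf(stderr, "cannot open %s\n", argv[2]);
        return 1;
    }
    fwrite(input, sizeof(float), count, out);
    fclose(out);

    free(input);
    stbi_image_free(pixels);
    return 0;
}
```

On the device side I would then copy that raw buffer into the model's input tensor before running inference, but I don't know whether the generated model on the N6 actually expects float32 or quantized uint8 input, which is part of what I'm asking above about preparing the input buffer.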
Any help, code snippets, or documentation pointers would be greatly appreciated!
Thanks in advance!