2025-08-29 5:24 AM - edited 2025-08-29 6:35 AM
Hi,
After deploying YOLOv8n to x-cube-n6-ai-people-detection, the app stops working: I only get a black screen.
I generated the TFLite model using the following script:
from ultralytics import YOLO
# load model
model = YOLO("yolov8n.pt")
# export to TFLite
model.export(format="tflite", imgsz=320, int8=True, nms=False)
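As a sanity check on the export (assuming the standard YOLOv8 detection head and nms=False, with the default 80 COCO classes), the model should emit a single output tensor whose last dimension is the total number of grid cells across the three stride levels:

```python
# Expected raw output shape for a standard YOLOv8 export with nms=False
# (assumption: default head with strides 8/16/32 and nc=80 COCO classes).
imgsz = 320
nc = 80  # adjust for a custom people/vehicle model
strides = (8, 16, 32)
num_cells = sum((imgsz // s) ** 2 for s in strides)
out_shape = (1, 4 + nc, num_cells)
print(out_shape)  # (1, 84, 2100)
```

If the tensor I see in st_ai_output does not match this shape, the export itself would already be suspect.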
Then I took the generated yolov8n_full_integer_quant.tflite and ran:
stedgeai generate --no-inputs-allocation --no-outputs-allocation \
--model yolov8n_full_integer_quant.tflite \
--target stm32n6 \
--st-neural-art default@user_neuralart.json
cp st_ai_output/network_ecblobs.h .
cp st_ai_output/network.c .
cp st_ai_output/network_atonbuf.xSPI2.raw network_data.xSPI2.bin
arm-none-eabi-objcopy -I binary network_data.xSPI2.bin \
--change-addresses 0x70380000 -O ihex network_data.hex
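My understanding of the --change-addresses step (an assumption on my side, based on the N6 memory map): 0x70380000 is the memory-mapped base of the external xSPI2 flash (0x70000000) plus the offset where the weight blob is expected, so the hex records land at the right flash offset:

```python
# Assumed memory layout: xSPI2 flash memory-mapped at 0x70000000 on the STM32N6.
XSPI2_BASE = 0x70000000
HEX_LOAD_ADDR = 0x70380000  # value passed to objcopy --change-addresses
offset = HEX_LOAD_ADDR - XSPI2_BASE
print(f"weights at flash offset 0x{offset:X}")  # 0x380000
```

If that offset does not match the weight region declared in user_neuralart.json, the NPU would read garbage, which could explain the black screen.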
After that I flashed the data:
STM32_Programmer_CLI.exe -c port=SWD mode=HOTPLUG -el "$env:DKEL" -hardRst -w network_data.hex
In STM32CubeIDE, when I run the app it is extremely slow and I never see any frames from the camera.
When I do the same with st_yolo_x_nano_480_1.0_0.25_3_int8 and set:
#define POSTPROCESS_TYPE POSTPROCESS_OD_ST_YOLOX_UF
everything works perfectly well.
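Since the YOLOX postprocess path works, I suspect the firmware postprocess type simply does not match YOLOv8's output layout. As a first check I can decode the raw int8 output on the host, assuming standard TFLite affine quantization (the scale/zero-point values below are placeholders, to be read from the model's output tensor metadata):

```python
def dequantize(q, scale, zero_point):
    # Standard TFLite affine dequantization: real = scale * (q - zero_point)
    return scale * (q - zero_point)

# Hypothetical quantization parameters; read the real ones from the tensor metadata.
scale, zero_point = 0.0204, -12
q_values = [-12, 37, 115]
print([round(dequantize(q, scale, zero_point), 3) for q in q_values])
```

If the dequantized boxes/scores look sane on the host, the problem is on the firmware postprocess side rather than in the model.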
Has anyone managed to run YOLOv8n on the STM32N6570-DK for people/vehicle detection?
Any hints on postprocess config, arena/memory pool size, or model export settings would be greatly appreciated.
Thanks in advance!