YOLOv8n deployment on X-CUBE-N6-AI people detection → black screen

HarisHer
Associate II

 

Hi,

After deploying YOLOv8n to x-cube-n6-ai-people-detection, the app stops working and I only get a black screen.

I generated the TFLite model using the following script:

from ultralytics import YOLO

# Load the pretrained YOLOv8n weights
model = YOLO("yolov8n.pt")

# Export a fully integer-quantized TFLite model at 320x320, without embedded NMS
model.export(format="tflite", imgsz=320, int8=True, nms=False)

 

Then I took:
yolov8n_full_integer_quant.tflite

 

and ran:

stedgeai generate --no-inputs-allocation --no-outputs-allocation \
  --model yolov8n_full_integer_quant.tflite \
  --target stm32n6 \
  --st-neural-art default@user_neuralart.json

cp st_ai_output/network_ecblobs.h .
cp st_ai_output/network.c .
cp st_ai_output/network_atonbuf.xSPI2.raw network_data.xSPI2.bin

arm-none-eabi-objcopy -I binary network_data.xSPI2.bin \
  --change-addresses 0x70380000 -O ihex network_data.hex
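# (Note) 0x70380000 is the xSPI2 external-flash address where the example application
# expects the network weights, per the default memory layout of the N6 AI examples.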

 

After that I flashed the data:

STM32_Programmer_CLI.exe -c port=SWD mode=HOTPLUG -el "$env:DKEL" -hardRst -w network_data.hex
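# DKEL is an environment variable holding the path to the DK board's external-flash loader (.stldr) used by the programmer.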

Inside CubeIDE, when I run the app it is extremely slow and I never see frames from the camera.

App config (screenshot: HarisHer_1-1756470229622.png)

 

Postprocess config (screenshot: HarisHer_0-1756470186992.png)

 

When I do the same with st_yolo_x_nano_480_1.0_0.25_3_int8 and set:

 

#define POSTPROCESS_TYPE POSTPROCESS_OD_ST_YOLOX_UF

 everything works perfectly well.

Has anyone managed to run YOLOv8n on the STM32N6570-DK with people/vehicle detection?
Any hints on postprocess config, arena/memory pool size, or model export settings would be greatly appreciated.

Thanks in advance!

 

 

1 REPLY
Julian E.
ST Employee

Hi @HarisHer,

 

Sorry for the very late answer...


Here is the config I used for this model:

ultralytics/examples/YOLOv8-STEdgeAI/stedgeai_models/object_detection/yolov8n_320_quant_pc_uf_od_coco-person.tflite at main · stm32-hotspot/ultralytics

/**
******************************************************************************
* @file    app_config.h
* @author  GPM Application Team
*
******************************************************************************
* @attention
*
* Copyright (c) 2023 STMicroelectronics.
* All rights reserved.
*
* This software is licensed under terms that can be found in the LICENSE file
* in the root directory of this software component.
* If no LICENSE file comes with this software, it is provided AS-IS.
*
******************************************************************************
*/

/* ---------------    Generated code    ----------------- */
#ifndef APP_CONFIG
#define APP_CONFIG

#include "arm_math.h"

#define USE_DCACHE

/*Defines: CMW_MIRRORFLIP_NONE; CMW_MIRRORFLIP_FLIP; CMW_MIRRORFLIP_MIRROR; CMW_MIRRORFLIP_FLIP_MIRROR;*/
#define CAMERA_FLIP CMW_MIRRORFLIP_NONE

#define ASPECT_RATIO_CROP (1) /* Crop both pipes to nn input aspect ratio; Original aspect ratio kept */
#define ASPECT_RATIO_FIT (2) /* Resize both pipe to NN input aspect ratio; Original aspect ratio not kept */
#define ASPECT_RATIO_FULLSCREEN (3) /* Resize camera image to NN input size and display a fullscreen image */
#define ASPECT_RATIO_MODE ASPECT_RATIO_CROP

/* Postprocessing type configuration */
#define POSTPROCESS_TYPE    POSTPROCESS_OD_YOLO_V8_UI
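/* Assumption: the _UI variant is used here because the network output stays quantized
   (int8, see the --output-data-type int8 flag in the generate command further below);
   the _UF variants, as with the st_yolo_x float-output model, expect float output. */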

#define NN_HEIGHT     (320)
#define NN_WIDTH      (320)
#define NN_BPP 3

#define COLOR_BGR (0)
#define COLOR_RGB (1)
#define COLOR_MODE COLOR_RGB
/* Classes */
#define NB_CLASSES   (1)
#define CLASSES_TABLE const char* classes_table[NB_CLASSES] = {\
   "person"}\

/* Postprocessing YOLO_V8 configuration */
#define AI_OD_YOLOV8_PP_NB_CLASSES        (1)
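/* 2100 candidate boxes = 40*40 + 20*20 + 10*10 grid cells for a 320x320 input with
   YOLOv8 strides 8/16/32; recompute this if NN_WIDTH/NN_HEIGHT change. */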
#define AI_OD_YOLOV8_PP_TOTAL_BOXES       (2100)
#define AI_OD_YOLOV8_PP_MAX_BOXES_LIMIT   (10)
#define AI_OD_YOLOV8_PP_CONF_THRESHOLD    (0.5)
#define AI_OD_YOLOV8_PP_IOU_THRESHOLD     (0.5)
#define WELCOME_MSG_1         "yolov8n_320_quant_pc_uf_od_coco-person.tflite"
#define WELCOME_MSG_2       "Model Running in STM32 MCU internal memory"

#endif      /* APP_CONFIG */

 

In your case, because you have 80 classes, you need to change:

  • NB_CLASSES to 80, and the same for AI_OD_YOLOV8_PP_NB_CLASSES
  • CLASSES_TABLE with your class names (see the sketch just below)
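For reference, here is a minimal sketch of those defines, assuming the export keeps the standard 80-class COCO ordering used by ultralytics (the names must match the order the model was trained with):

/* Sketch only: default ultralytics COCO class order assumed. */
#define NB_CLASSES   (80)
#define AI_OD_YOLOV8_PP_NB_CLASSES        (80)
#define CLASSES_TABLE const char* classes_table[NB_CLASSES] = {\
   "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck",\
   "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench",\
   "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra",\
   "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",\
   "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove",\
   "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup",\
   "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange",\
   "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",\
   "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse",\
   "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink",\
   "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",\
   "hair drier", "toothbrush"}

AI_OD_YOLOV8_PP_TOTAL_BOXES can stay at (2100): it depends on the 320x320 input resolution, not on the number of classes.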

 

I used the ST Model Zoo services for that:

STMicroelectronics/stm32ai-modelzoo-services: AI Model Zoo services for STM32 devices

 

For the YAML, I used this one as a starting point:

ultralytics/examples/YOLOv8-STEdgeAI/stedgeai_models/gesture_detection/user_config_yolov8n_hagrid_gesture_deploy.yaml at main · stm32-hotspot/ultralytics

As explained here:

ultralytics/examples/YOLOv8-STEdgeAI at main · stm32-hotspot/ultralytics

 

I took the generated app_config.h from \stm32ai-modelzoo-services\application_code\object_detection\STM32N6\Application\STM32N6570-DK\Inc.

 

So the complete flow I used to manually deploy the model linked above is:

stedgeai generate --model yolov8n_320_quant_pc_uf_od_coco-person.tflite \
  --target stm32n6 \
  --st-neural-art default@user_neuralart_STM32N6570-DK.json \
  --input-data-type uint8 --output-data-type int8

cp st_ai_output/network.c STM32N6570-DK/
cp st_ai_output/network_ecblobs.h STM32N6570-DK/
cp st_ai_output/network_atonbuf.xSPI2.raw STM32N6570-DK/network_data.xSPI2.bin

arm-none-eabi-objcopy -I binary STM32N6570-DK/network_data.xSPI2.bin \
  --change-addresses 0x70380000 -O ihex STM32N6570-DK/network_data.hex
  • Then I replaced the app_config.h in STM32N6-GettingStarted-ObjectDetection\Application\STM32N6570-DK\Inc with the one above.
  • Flashed the weights (network_data.hex) generated with the previous commands.
  • Opened the application in STM32CubeIDE and ran the project (STM32N6-GettingStarted-ObjectDetection\Application\STM32N6570-DK\STM32CubeIDE).

 

I hope this helps you.

 

Have a good day,

Julian

