
How to use yolov8 on STM32H747

dogg
Associate II

Hi,

 

I have successfully trained and run the models in the model zoo, but I would like to use a larger model based on YOLOv8 for higher resolution.

 

It was mentioned in another post that YOLOv5 and later are not compatible with ST chips at the moment.

 

In any case, how can I use an earlier model, say YOLOv4? The available models seem to be:

st_ssd_mobilenet_v1
ssd_mobilenet_v2_fpnlite
st_yolo_lc_v1
tiny_yolo_v2
 
What would the .yaml file look like? Can I just take the .tflite file from YOLOv4 and use one of the YOLO model types from the list above?
 
Alternatively, can I increase the input size of the available models to be larger than 416x416?
 
I am using the H747 disco board.
 
thanks
Julian E.
ST Employee

Hello @dogg,

For the moment, as you said, only the following models are available for object detection:

st_ssd_mobilenet_v1
ssd_mobilenet_v2_fpnlite
st_yolo_lc_v1
tiny_yolo_v2
 
To deploy one of them, you can look at this configuration example:
 
Simply put the path to the .tflite model you want to deploy.
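For reference, pointing the configuration at an existing .tflite looks like this (a minimal sketch; the path is a placeholder, and the fields follow the general section of the model zoo's user_config.yaml):

```yaml
general:
  model_path: C:/path/to/your_model.tflite   # placeholder: path to the .tflite you want to deploy

operation_mode: deployment
```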
 
Regarding the input size, you can change it, but you will need to retrain the model:
  • for tiny_yolo_v2, the input size should be a multiple of 32;
  • for st_yolo_lc_v1, the input size should be a multiple of 16;
  • for st_ssd_mobilenet_v1, the supported input sizes are 192, 224 and 256;
  • for ssd_mobilenet_v2_fpnlite, the supported input sizes are 192, 224, 256 and 416.

In the yaml, in the training section, there is an input_shape field.
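For example, switching tiny_yolo_v2 to a 608x608 input (a multiple of 32) would look like this in the training section (a minimal sketch; the surrounding fields follow the model zoo's user_config.yaml conventions):

```yaml
training:
  model:
    type: tiny_yolo_v2
    input_shape: (608, 608, 3)   # must be a multiple of 32 for tiny_yolo_v2
```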
 
 
 
Have a good day,
Julian

In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

When I choose 1024x1024 for the input size, I get self.input_shape_list = None at line 296 of tiny_yolo_v2_preprocess.py

 

thanks

Julian E.
ST Employee

Can you share your user_config.yaml please?

 

When you say 1024x1024, you mean something like this?

input_shape: (1024, 1024, 3)

 

Julian


general:
  project_name: COCO_2017_person_Demo
  model_type: tiny_yolo_v2
  #model_path: C:/Users/Haris/Desktop/stm32ai-modelzoo/object_detection/pretrained_models/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_80_classes/ssd_mobilenet_v2_fpnlite_100_416/ssd_mobilenet_v2_fpnlite_100_416_int8.tflite # C:/Users/Haris/Desktop/stm32ai-modelzoo/object_detection/pretrained_models/st_ssd_mobilenet_v1/ST_pretrainedmodel_public_dataset/coco_2017_person/st_ssd_mobilenet_v1_025_256/quantized_model.tflite
  logs_dir: logs
  saved_models_dir: saved_models
  gpu_memory_limit: 12
  global_seed: 127

operation_mode: chain_tqeb
#choices=['training' , 'evaluation', 'deployment', 'quantization', 'benchmarking',
#        'chain_tqeb','chain_tqe','chain_eqe','chain_qb','chain_eqeb','chain_qd ']

# dataset:
#   name: coco_2017_80_classes
#   class_names: [ aeroplane,bicycle,bird,boat,bottle,bus,car,cat,chair,cow,diningtable,dog,horse,motorbike,person,pottedplant,sheep,sofa,train,tvmonitor ] # Names of the classes in the dataset.
#   training_path:
#   validation_path:
#   test_path:
#   quantization_path:
#   quantization_split: 0.3

dataset:
  name: bugs                                    # Dataset name. Optional, defaults to "<unnamed>".
  class_names: [nc, mr, wf] # [ aeroplane,bicycle,bird,boat,bottle,bus,car,cat,chair,cow,diningtable,dog,horse,motorbike,person,pottedplant,sheep,sofa,train,tvmonitor ] # Names of the classes in the dataset.
  training_path: C:/Users/Haris/Desktop/stm32ai-modelzoo/object_detection/src/bugs/train
  validation_path: C:/Users/Haris/Desktop/stm32ai-modelzoo/object_detection/src/bugs/valid 
  validation_split: 0.2                                      # Training/validation sets split ratio.
  test_path:
  quantization_path: C:/Users/Haris/Desktop/stm32ai-modelzoo/object_detection/src/bugs/quant
  # quantization_split:                                        # Quantization split ratio.
  seed: 123                                                  # Random generator seed used when splitting a dataset.

preprocessing:
  rescaling: { scale: 1/127.5, offset: -1 }
  resizing:
    aspect_ratio: fit
    interpolation: nearest
  color_mode: rgb

data_augmentation:
  rotation: 30
  shearing: 15
  translation: 0.1
  vertical_flip: 0.5
  horizontal_flip: 0.2
  gaussian_blur: 3.0
  linear_contrast: [ 0.75, 1.5 ]

training:
  model:
    type: tiny_yolo_v2 #st_ssd_mobilenet_v1
    alpha: 0.35
    input_shape: (1024, 1024, 3)
    weights: None
    #pretrained_weights: imagenet
  dropout:
  batch_size: 2
  epochs: 2
  optimizer:
    Adam:
      learning_rate: 0.001
  callbacks:
    ReduceLROnPlateau:
      monitor: val_loss
      patience: 20
    EarlyStopping:
      monitor: val_loss
      patience: 40

postprocessing:
  confidence_thresh: 0.6
  NMS_thresh: 0.5
  IoU_eval_thresh: 0.3
  plot_metrics: True   # Plot precision versus recall curves. Default is False.
  max_detection_boxes: 10

quantization:
  quantizer: TFlite_converter
  quantization_type: PTQ
  quantization_input_type: uint8
  quantization_output_type: float
  granularity: per_channel   #per_tensor
  optimize: False   #can be True if per_tensor
  export_dir: quantized_models

benchmarking:
  board: STM32H747I-DISCO

tools:
  stedgeai:
    version: 9.1.0
    optimization: balanced
    on_cloud: False
    path_to_stedgeai: C:/Users/haris/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/9.1.0/Utilities/windows/stedgeai.exe
  path_to_cubeIDE: C:/ST/STM32CubeIDE_1.16.1/STM32CubeIDE/stm32cubeide.exe

deployment:
  c_project_path: ../../stm32ai_application_code/object_detection/
  IDE: GCC
  verbosity: 1
  hardware_setup:
    serie: STM32H7
    board: STM32H747I-DISCO

mlflow:
  uri: ./experiments_outputs/mlruns

hydra:
  run:
    dir: ./experiments_outputs/${now:%Y_%m_%d_%H_%M_%S}

First of all, I think there is an issue here: 

input_shape: (1024, 601024, 3)

Then, is your dataset a public one? Can you link it if so? I would like to replicate what you are doing, but I don't have the dataset on my PC.

 

Julian



That was a typo, I indeed use 1024, 1024, 3.

Dataset is this one.

 

thanks

Julian E.
ST Employee

Thanks, 

I will take a look.

Julian



Hello,

Can you share your terminal output?

Can you confirm that you have an NVIDIA GPU properly configured to use tensorflow-gpu on your machine?

Best Regards.

Sure: 

 

[INFO] : Starting training...
6/202 [..............................] - ETA: 2:38 - loss: 2790.8662 Input Shape List: None

 

The print statement that outputs "None" is at line 296 of tiny_yolo_v2_preprocess.py

If I use 608 for the input size then I get this: Input Shape List: [(608, 608)]
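I wonder whether the preprocessing builds its shape list from a fixed range. Darknet-style tiny YOLOv2 training typically randomizes over 320-608 in steps of 32, and 1024 would fall outside such a range, which would explain the None. A hypothetical sketch of that behavior (the function and its defaults are my guesses, not the model zoo's actual code):

```python
def build_input_shape_list(target_size, min_size=320, max_size=608, step=32):
    """Return the training shapes matching target_size, assuming a fixed
    Darknet-style multi-scale range (hypothetical, not the model zoo's
    actual implementation)."""
    # Candidate square shapes: 320, 352, ..., 608 by default.
    candidates = [(s, s) for s in range(min_size, max_size + 1, step)]
    matches = [shape for shape in candidates if shape[0] == target_size]
    return matches or None  # None when the size falls outside the range

print(build_input_shape_list(608))   # [(608, 608)]
print(build_input_shape_list(1024))  # None, like the error I see
```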

 

thanks