How to use YOLOv8 on STM32H747
‎2024-11-19 4:09 AM - last edited on ‎2024-11-20 3:34 AM by mƎALLEm
Hi,
I have successfully run and trained the models in the model zoo, but I would like to use a larger YOLOv8 model for higher resolution.
It was mentioned in another post that YOLOv5 and above are not compatible with ST chips at the moment.
In any case, how can I use an earlier model, say YOLOv4? The available models seem to be:
Labels: Model Zoo
‎2024-11-19 7:07 AM - edited ‎2024-11-20 1:29 AM
Hello @dogg,
For the moment, as you said, only the following models are available for object detection:
Doc: https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/object_detection/deployment
- for tiny_yolo_v2 the input size should be a multiple of 32,
- for st_yolo_lc_v1 the input size should be a multiple of 16,
- the correct input shapes are 192, 224, 256 for the SSD v1 model (st_ssd_mobilenet_v1),
- and 192, 224, 256, 416 for SSD v2 (ssd_mobilenet_v2_fpnlite); a quick sanity check for a candidate input size is sketched just below this list.
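A minimal sketch (not part of the model zoo; the model names and rules are copied from the list above) to check a candidate square input size:

# Minimal sketch, not model zoo code: check a candidate square input size
# against the constraints stated in this thread.
VALID_SIZES = {
    "st_ssd_mobilenet_v1": {192, 224, 256},
    "ssd_mobilenet_v2_fpnlite": {192, 224, 256, 416},
}
MULTIPLE_OF = {
    "tiny_yolo_v2": 32,
    "st_yolo_lc_v1": 16,
}

def is_valid_input_size(model_type, size):
    """Return True if `size` (height == width) satisfies the stated constraint."""
    if model_type in MULTIPLE_OF:
        return size % MULTIPLE_OF[model_type] == 0
    if model_type in VALID_SIZES:
        return size in VALID_SIZES[model_type]
    raise ValueError(f"Unknown model type: {model_type}")

print(is_valid_input_size("tiny_yolo_v2", 608))    # True
print(is_valid_input_size("tiny_yolo_v2", 1024))   # True by this rule, yet later in the
                                                   # thread the preprocessing still rejects it
print(is_valid_input_size("st_yolo_lc_v1", 200))   # False: not a multiple of 16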
‎2024-11-20 1:48 AM
When I choose 1024x1024 for the input size, I get self.input_shape_list = None at line 296 of tiny_yolo_v2_preprocess.py
thanks
‎2024-11-20 2:00 AM - edited ‎2024-11-20 2:00 AM
Can you share your user_config.yaml please?
When you say 1024x1024, do you mean something like this?
input_shape: (1024, 1024, 3)
Julian
‎2024-11-20 2:01 AM - edited ‎2024-11-20 2:07 AM
general:
  project_name: COCO_2017_person_Demo
  model_type: tiny_yolo_v2
  #model_path: C:/Users/Haris/Desktop/stm32ai-modelzoo/object_detection/pretrained_models/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_80_classes/ssd_mobilenet_v2_fpnlite_100_416/ssd_mobilenet_v2_fpnlite_100_416_int8.tflite # C:/Users/Haris/Desktop/stm32ai-modelzoo/object_detection/pretrained_models/st_ssd_mobilenet_v1/ST_pretrainedmodel_public_dataset/coco_2017_person/st_ssd_mobilenet_v1_025_256/quantized_model.tflite
  logs_dir: logs
  saved_models_dir: saved_models
  gpu_memory_limit: 12
  global_seed: 127

operation_mode: chain_tqeb
#choices=['training' , 'evaluation', 'deployment', 'quantization', 'benchmarking',
#         'chain_tqeb','chain_tqe','chain_eqe','chain_qb','chain_eqeb','chain_qd ']

# dataset:
#   name: coco_2017_80_classes
#   class_names: [ aeroplane,bicycle,bird,boat,bottle,bus,car,cat,chair,cow,diningtable,dog,horse,motorbike,person,pottedplant,sheep,sofa,train,tvmonitor ] # Names of the classes in the dataset.
#   training_path:
#   validation_path:
#   test_path:
#   quantization_path:
#   quantization_split: 0.3

dataset:
  name: bugs # Dataset name. Optional, defaults to "<unnamed>".
  class_names: [nc, mr, wf] # [ aeroplane,bicycle,bird,boat,bottle,bus,car,cat,chair,cow,diningtable,dog,horse,motorbike,person,pottedplant,sheep,sofa,train,tvmonitor ] # Names of the classes in the dataset.
  training_path: C:/Users/Haris/Desktop/stm32ai-modelzoo/object_detection/src/bugs/train
  validation_path: C:/Users/Haris/Desktop/stm32ai-modelzoo/object_detection/src/bugs/valid
  validation_split: 0.2 # Training/validation sets split ratio.
  test_path:
  quantization_path: C:/Users/Haris/Desktop/stm32ai-modelzoo/object_detection/src/bugs/quant
  # quantization_split: # Quantization split ratio.
  seed: 123 # Random generator seed used when splitting a dataset.

preprocessing:
  rescaling: { scale: 1/127.5, offset: -1 }
  resizing:
    aspect_ratio: fit
    interpolation: nearest
  color_mode: rgb

data_augmentation:
  rotation: 30
  shearing: 15
  translation: 0.1
  vertical_flip: 0.5
  horizontal_flip: 0.2
  gaussian_blur: 3.0
  linear_contrast: [ 0.75, 1.5 ]

training:
  model:
    type: tiny_yolo_v2 #st_ssd_mobilenet_v1
    alpha: 0.35
    input_shape: (1024, 1024, 3)
    weights: None
    #pretrained_weights: imagenet
  dropout:
  batch_size: 2
  epochs: 2
  optimizer:
    Adam:
      learning_rate: 0.001
  callbacks:
    ReduceLROnPlateau:
      monitor: val_loss
      patience: 20
    EarlyStopping:
      monitor: val_loss
      patience: 40

postprocessing:
  confidence_thresh: 0.6
  NMS_thresh: 0.5
  IoU_eval_thresh: 0.3
  plot_metrics: True # Plot precision versus recall curves. Default is False.
  max_detection_boxes: 10

quantization:
  quantizer: TFlite_converter
  quantization_type: PTQ
  quantization_input_type: uint8
  quantization_output_type: float
  granularity: per_channel #per_tensor
  optimize: False #can be True if per_tensor
  export_dir: quantized_models

benchmarking:
  board: STM32H747I-DISCO

tools:
  stedgeai:
    version: 9.1.0
    optimization: balanced
    on_cloud: False
    path_to_stedgeai: C:/Users/haris/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/9.1.0/Utilities/windows/stedgeai.exe
  path_to_cubeIDE: C:/ST/STM32CubeIDE_1.16.1/STM32CubeIDE/stm32cubeide.exe

deployment:
  c_project_path: ../../stm32ai_application_code/object_detection/
  IDE: GCC
  verbosity: 1
  hardware_setup:
    serie: STM32H7
    board: STM32H747I-DISCO

mlflow:
  uri: ./experiments_outputs/mlruns

hydra:
  run:
    dir: ./experiments_outputs/${now:%Y_%m_%d_%H_%M_%S}
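For reference, a minimal sketch of inspecting such a file with plain PyYAML (this is not the model zoo's own loader, which goes through Hydra; the file name user_config.yaml is assumed from the earlier question):

# Minimal sketch, not the model zoo's loader: read user_config.yaml with PyYAML
# and print the training input shape. Assumes the file is in the current directory.
import yaml

with open("user_config.yaml", "r") as f:
    cfg = yaml.safe_load(f)

# Note: "(1024, 1024, 3)" is parsed by YAML as a plain string, not a tuple,
# so the model zoo scripts have to parse the shape themselves.
input_shape = cfg["training"]["model"]["input_shape"]
print(type(input_shape), input_shape)   # <class 'str'> (1024, 1024, 3)
print(cfg["general"]["model_type"])     # tiny_yolo_v2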
‎2024-11-20 2:07 AM
First of all, I think there is an issue here:
input_shape: (1024, 601024, 3)
Then, is your dataset a public one? If so, can you link it to me? I would like to replicate what you are doing, but I don't have a dataset on my PC.
Julian
‎2024-11-20 2:12 AM
‎2024-11-20 2:50 AM
Thanks,
I will take a look.
Julian
‎2024-11-20 6:08 AM
Hello,
Can you share your terminal output?
Can you confirm that you have an NVIDIA GPU properly configured to use tensorflow_gpu on your machine?
Best Regards.
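A minimal check for this (assuming TensorFlow 2.x is installed) is to list the GPUs TensorFlow can see:

# Minimal check, assuming TensorFlow 2.x is installed: list the GPUs visible to TensorFlow.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)   # an empty list means training falls back to the CPU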
‎2024-11-20 6:17 AM
Sure:
[INFO] : Starting training...
6/202 [..............................] - ETA: 2:38 - loss: 2790.8662 Input Shape List: None
The print statement that outputs "None" is at line 296 of tiny_yolo_v2_preprocess.py.
If I use 608 for the input size, I get this instead: Input Shape List: [(608, 608)]
thanks
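The behaviour above would be consistent with the preprocessing only knowing a fixed set of tiny_yolo_v2 input sizes; in the original YOLOv2 multi-scale scheme these are multiples of 32 from 320 up to 608. A purely illustrative sketch of that failure mode, not the model zoo's actual tiny_yolo_v2_preprocess.py code:

# Purely illustrative, NOT the actual code in tiny_yolo_v2_preprocess.py.
# Assumes the preprocessing only accepts a fixed range of square input sizes
# (multiples of 32 up to 608, as in classic YOLOv2 multi-scale training)
# and ends up with None otherwise.
SUPPORTED_SIZES = [(s, s) for s in range(320, 608 + 1, 32)]   # (320, 320) ... (608, 608)

def get_input_shape_list(requested_size):
    """Return the matching shape list, or None if the size is unsupported."""
    shapes = [s for s in SUPPORTED_SIZES if s == (requested_size, requested_size)]
    return shapes or None

print(get_input_shape_list(608))    # [(608, 608)]
print(get_input_shape_list(1024))   # None -> matches the value seen during training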
