
Error training object_detection using model-zoo-services

milanvdm
Associate II

I'm doing a proof of concept, trying to run an object detection model on the STM32N6.

For this, I'm following this guide: https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/object_detection/src/training/README.md

 

1. `!git clone https://github.com/STMicroelectronics/stm32ai-modelzoo-services.git`

2. `!pip install -r stm32ai-modelzoo-services/requirements.txt`

3. I'm using a Pascal VOC XML dataset, which I'm converting with the provided scripts; the config files are attached in a zip file

4. `!python converter.py --config-name dataset_config_train.yaml`

5. `!python converter.py --config-name dataset_config_validation.yaml`

6. `!python dataset_create_tfs.py --config-name dataset_config_tfs.yaml`

7. `!python stm32ai_main.py`

 

This gives the following error (full stack trace attached): `ValueError: Can't convert Python sequence with mixed types to Tensor.`
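
For reference, this ValueError is what TensorFlow raises when it is asked to build a tensor from a Python sequence whose elements do not share a single type, e.g. strings mixed with numbers, which usually hints at an annotation or label value being parsed as text. A minimal, hypothetical illustration (not taken from the model zoo code):

import tensorflow as tf

# A homogeneous row of numbers converts fine:
tf.constant([0.0, 0.5, 0.5, 0.2, 0.3])

# A row that mixes a string (e.g. a class name never mapped to an integer id)
# with numeric box coordinates raises the same error:
#   ValueError: Can't convert Python sequence with mixed types to Tensor.
tf.constant(["vehicle", 0.5, 0.5, 0.2, 0.3])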

As I believe I'm following the steps correctly, I'm unsure what I'm missing to train this model correctly. Any pointers?


11 REPLIES
MCHTO.1
ST Employee

Hello,

Could you please share your installed Python packages using:

!pip list

 

Package                      Version
---------------------------- ------------
absl-py                      2.1.0
alembic                      1.14.0
antlr4-python3-runtime       4.9.3
appdirs                      1.4.4
archspec                     0.2.2
astunparse                   1.6.3
audioread                    3.0.1
blinker                      1.9.0
boltons                      23.1.1
Brotli                       1.1.0
cachetools                   5.5.0
certifi                      2024.12.14
cffi                         1.16.0
charset-normalizer           3.3.2
click                        8.1.8
cloudpickle                  2.2.1
cmaes                        0.11.1
colorama                     0.4.5
coloredlogs                  15.0.1
colorlog                     6.9.0
conda                        23.11.0
conda-libmamba-solver        23.12.0
conda-package-handling       2.2.0
conda_package_streaming      0.9.0
contourpy                    1.3.1
cycler                       0.12.1
databricks-cli               0.18.0
decorator                    5.1.1
distro                       1.8.0
docker                       6.1.3
entrypoints                  0.4
Flask                        2.3.3
flatbuffers                  2.0.7
fonttools                    4.55.3
gast                         0.6.0
gitdb                        4.0.12
GitPython                    3.1.44
google-auth                  2.37.0
google-auth-oauthlib         0.4.6
google-pasta                 0.2.0
greenlet                     3.1.1
grpcio                       1.68.1
gunicorn                     20.1.0
h5py                         3.12.1
humanfriendly                10.0
hydra-core                   1.3.2
idna                         3.6
imageio                      2.36.1
imgaug                       0.4.0
importlib-metadata           6.11.0
itsdangerous                 2.2.0
Jinja2                       3.1.5
joblib                       1.2.0
jsonpatch                    1.33
jsonpointer                  2.4
keras                        2.8.0
Keras-Preprocessing          1.1.2
kiwisolver                   1.4.8
larq                         0.13.3
lazy_loader                  0.4
libclang                     18.1.1
libmambapy                   1.5.5
librosa                      0.10.0.post2
llvmlite                     0.43.0
Mako                         1.2.4
mamba                        1.5.5
Markdown                     3.7
MarkupSafe                   3.0.2
marshmallow                  3.20.1
matplotlib                   3.6.2
menuinst                     2.0.1
mlflow                       2.3.0
mpmath                       1.3.0
msgpack                      1.1.0
munch                        2.5.0
networkx                     3.4.2
numba                        0.60.0
numpy                        1.23.4
nvidia-cublas-cu11           11.11.3.6
nvidia-cudnn-cu11            8.6.0.163
oauthlib                     3.2.2
omegaconf                    2.3.0
onnx                         1.12.0
onnxconverter-common         1.13.0
onnxruntime                  1.15.1
opencv-python                4.6.0.66
opt_einsum                   3.4.0
optuna                       3.1.1
packaging                    23.2
pandas                       1.5.3
pillow                       11.1.0
pip                          23.3.2
platformdirs                 4.1.0
pluggy                       1.3.0
pooch                        1.6.0
protobuf                     3.19.6
pyarrow                      11.0.0
pyasn1                       0.6.1
pyasn1_modules               0.4.1
pycosat                      0.6.6
pycparser                    2.21
PyJWT                        2.10.1
pyparsing                    3.2.1
pyserial                     3.5
PySocks                      1.7.1
python-dateutil              2.9.0.post0
pytz                         2023.4
PyYAML                       6.0.2
querystring-parser           1.2.4
requests                     2.28.2
requests-oauthlib            2.0.0
rsa                          4.9
ruamel.yaml                  0.18.5
ruamel.yaml.clib             0.2.7
scikit-image                 0.24.0
scikit-learn                 1.2.2
scipy                        1.13.1
seaborn                      0.12.2
setuptools                   68.2.2
shapely                      2.0.6
six                          1.17.0
skl2onnx                     1.14.0
smmap                        5.0.2
soundfile                    0.13.0
soxr                         0.5.0.post1
SQLAlchemy                   2.0.36
sqlparse                     0.5.3
sympy                        1.13.3
tabulate                     0.9.0
tensorboard                  2.8.0
tensorboard-data-server      0.6.1
tensorboard-plugin-wit       1.8.1
tensorflow                   2.8.3
tensorflow-estimator         2.8.0
tensorflow-io-gcs-filesystem 0.37.1
termcolor                    2.5.0
terminaltables               3.1.10
tf2onnx                      1.14.0
threadpoolctl                3.5.0
tifffile                     2024.12.12
tqdm                         4.65.0
truststore                   0.8.0
typing_extensions            4.12.2
urllib3                      1.26.13
websocket-client             1.8.0
Werkzeug                     3.1.3
wget                         3.2
wheel                        0.42.0
wrapt                        1.17.0
xmlrunner                    1.7.7
zipp                         3.21.0
zstandard                    0.22.0
milanvdm
Associate II

@MCHTO.1 Just checking in to see if you were able to reproduce the issue we are facing, or if you need more information.

milanvdm
Associate II

@MCHTO.1 Kind reminder: our STM32N6 DK arrived today, but we are still blocked on this issue as we cannot train the model on our own data.

Hello @milanvdm ,

 

Can you link me to where you found your dataset? I will try to reproduce the issue on my side.

Have a good day,

Julian


milanvdm
Associate II

@Julian E. It's not a public dataset, but I will create a public Google Colab notebook and try to reproduce the issue with a public dataset!

milanvdm
Associate II

Google Colab has stopped supporting TensorFlow versions below 2.12, so I could not fully test this out (it might be worth looking into upgrading to a newer TF version!).

I've attached a zip file with the Colab script and a public dataset. Hopefully this helps you reproduce the issue. Let me know if you need more information!

Hello @milanvdm ,

 

I did not reproduce your error.

I started from zero and here is what I did so you can follow along:

 

ST MODEL ZOO INSTALL:

First, I installed the ST Model Zoo services like this:

  1. git clone https://github.com/STMicroelectronics/stm32ai-modelzoo-services.git
  2. cd stm32ai-modelzoo-services
  3. python -m venv st_zoo
  4. st_zoo\Scripts\activate.bat
  5. pip install -r requirements.txt

Make sure to use a virtual environment, as skipping it is a common source of errors (on Linux, activate it with source st_zoo/bin/activate instead of the Windows activate.bat shown above).

 

CREATE A DATASET FROM VOC FORMAT:

Then I took your data, which is inside the /voc folder, and split it into /train and /validation.

I copied this folder into /object_detection/datasets/.

 

Then I edited dataset_config.yaml in the dataset_converter folder:

  • Change the format
  • Set your class names
  • Under pascal_voc_format, add the paths to your training images and XML files
  • In my case, I want the converted dataset in a folder named "converted_voc"
  • Run the python script converter.py

 

dataset:
  format: pascal_voc_format
  class_names: [vehicle,license_plate]

coco_format:
  images_path: ../chess_coco/train
  json_annotations_file_path: ../chess_coco/train/_annotations.coco.json
  export_dir: subset_chess/train
  
pascal_voc_format:
  images_path: ../voc/train
  xml_files_path: ../voc/train
  export_dir: ../converted_voc/train

hydra:
  run:
    dir: outputs/${now:%Y_%m_%d_%H_%M_%S}

 

(Make sure to activate the virtual environment before running any command.)

 

After this, /converted_voc/train should contain your .jpg images and the corresponding .txt label files.

You can do the same for your validation data.
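
If you want to rule out malformed labels as the cause of the "mixed types" ValueError, a quick pass over the converted .txt files can help. This is only a sketch, assuming the converter writes Darknet-style labels (one "class_id x_center y_center width height" line per box, all values numeric); adjust the expected field count if your files look different:

import glob
import os

# Hypothetical sanity check: every line of every converted label file should
# contain exactly 5 purely numeric fields (class id + 4 box coordinates).
label_dir = "../converted_voc/train"  # path taken from the config above
for path in glob.glob(os.path.join(label_dir, "*.txt")):
    with open(path) as f:
        for line_no, line in enumerate(f, start=1):
            fields = line.split()
            if not fields:
                continue  # skip empty lines
            try:
                values = [float(v) for v in fields]
            except ValueError:
                print(f"{path}:{line_no}: non-numeric field in {fields}")
                continue
            if len(values) != 5:
                print(f"{path}:{line_no}: expected 5 fields, got {len(values)}")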

 

Next, we use the create_tfs scripts:

  1. Go to /dataset_create_tfs and edit the YAML file
  2. Run the python script dataset_create_tfs.py

 

dataset:
  dataset_name: test_voc
  training_path: ../converted_voc/train
  validation_path: ../converted_voc/validation
  test_path:

settings:
  max_detections: 20
  
hydra:
  run:
    dir: outputs/${now:%Y_%m_%d_%H_%M_%S}

 

 

Now you should have .tfs files alongside your .jpg and .txt files.
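
One extra check that is not in the guide, just a suggestion: max_detections in the YAML above presumably caps how many boxes per image end up in the .tfs records, so it is worth knowing the largest box count in your converted dataset before settling on a value. A small sketch to find it, under the same Darknet-style label assumption as before:

import glob
import os

# Hypothetical helper: report the largest number of boxes in any single image,
# assuming one .txt label file per image with one box per line.
label_dir = "../converted_voc/train"  # path taken from the configs above
max_boxes = 0
for path in glob.glob(os.path.join(label_dir, "*.txt")):
    with open(path) as f:
        n_boxes = sum(1 for line in f if line.strip())
    max_boxes = max(max_boxes, n_boxes)
print(f"Largest number of boxes in a single image: {max_boxes}")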

 

USING ST MODEL ZOO

Next is using the Model Zoo:

  1. Select an object detection model in the Model Zoo
  2. I used this one: stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192 (in the STMicroelectronics/stm32ai-modelzoo repository on GitHub)
  3. Download the .h5 model and ssd_mobilenet_v2_fpnlite_035_192_config.yaml
  4. Copy the .h5 model to /object_detection/pretrained_models
  5. Copy ssd_mobilenet_v2_fpnlite_035_192_config.yaml to /object_detection/src

 

Then edit ssd_mobilenet_v2_fpnlite_035_192_config.yaml:

  1. Change the operation mode to training
  2. Replace the classes with your own classes
  3. Change the dataset paths
  4. Run the command: python stm32ai_main.py --config-name ssd_mobilenet_v2_fpnlite_035_192_config.yaml

This should work.
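
Before launching the training, a quick programmatic look at the edited config can save a failed run. This is only a sketch: the field names (operation_mode, dataset.class_names, dataset.training_path, dataset.validation_path) are assumed from the downloaded YAML and may need adjusting to match your file.

import yaml  # PyYAML is already installed by the model zoo requirements

# Hypothetical pre-flight check on the edited config (field names assumed,
# adjust them to whatever your downloaded YAML actually uses).
with open("ssd_mobilenet_v2_fpnlite_035_192_config.yaml") as f:
    cfg = yaml.safe_load(f)

assert cfg.get("operation_mode") == "training", "operation_mode should be 'training'"

dataset = cfg.get("dataset", {}) or {}
print("Classes:        ", dataset.get("class_names"))
print("Training path:  ", dataset.get("training_path"))
print("Validation path:", dataset.get("validation_path"))
# The class names should match the ones used in dataset_config.yaml
# (vehicle and license_plate in this thread).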

 

If this works, you are free to change whatever you need for your own use case.

 

To use the STM32N6, you will need to download the getting-started package here:

https://www.st.com/en/development-tools/stm32n6-ai.html

And follow this guide: https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/object_detection/deployment/README_STM32N6.md

 

Have a good day,

Julian

 



@Julian E. 

I'm using an Ubuntu environment. After installing the conda environment according to the wiki, I used the same dataset and paths, and got an error after running converter.py, as shown in the screenshots below. How can I solve it?

You can reproduce the problem after pip install -r requirements.txt by running python converter.py dataset_config.yaml.

[Screenshots attached: SullyNiu_3-1739439362908.png, SullyNiu_1-1739438615108.png, SullyNiu_0-1739438365218.png]

I found a typo: it should be python, not pyhton.

[Screenshot attached: SullyNiu_2-1739438856826.png]