
pretrained weights with YOLOX

mls
Associate II

Hi all,

I'm training a ST_yolox model to detect cars. In my user_config.yaml I tried to use this:
pretrained_weights: coco

When I launch the training script, I get this error:

raise ValueError("\nUnknown or unsupported attribute. Received `{}`{}".format(attr, message))

ValueError: Unknown or unsupported attribute. Received `pretrained_weights`
Please check the 'training.model' section of your configuration file.

But if I understand correctly, we can use this option:

https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/object_detection/docs/README_TRAINING.md#2-7

Could someone give me some explanation?


 

Julian E.
ST Employee

Hello @mls,

 

You are right, it is in the doc, but the example shown in the doc is the training of an ssd_mobilenetv1.


For this type of model, pretrained weights are available.

For YOLOX, they are not.

I suggest that, for any model, you start from the yaml in the model folder. In your case, this one for example:

stm32ai-modelzoo/object_detection/st_yolo_x/ST_pretrainedmodel_public_dataset/coco_2017_80_classes/st_yolo_x_nano_480 at main · STMicroelectronics/stm32ai-modelzoo

 

In the yaml, you can see which attributes are available and which are not.
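As an illustration only (the attribute names and values below are assumptions sketched from the model zoo docs, not copied from the real file; always start from the yaml shipped in the model folder), a `training.model` section for st_yolo_x might look like:

```yaml
# Illustrative sketch -- attribute names and values are assumptions;
# copy the real ones from the yaml in the st_yolo_x model folder.
training:
  model:
    type: st_yolo_x
    input_shape: (480, 480, 3)
    # note: no `pretrained_weights` attribute, unlike ssd_mobilenetv1
```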

 

Please feel free to share any feedback that you may have for us to improve our tools.

 

Have a good day,

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Hello @Julian E. 

Thank you for your answer. 

Ah, OK... it's a shame that we can't use pretrained weights on this model.
I do have a good dataset, though. But do you think it's better to use YOLOv8 or v11 to detect cars?
With these models (v8 or v11), can I use pretrained weights?
I don't need fast inference, but I do need precision.
Also, do you think I can test a 480x480 input, or 640x640?

Hi @mls,

 

For YOLOv8 and v11, it is used directly with Ultralytics:

ultralytics/examples/YOLOv8-STEdgeAI at main · stm32-hotspot/ultralytics

 

I believe they offer the possibility to train a model from pretrained weights; you then need to export it.

On the page I linked, you should find documentation regarding that.
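As a rough sketch of that flow (this uses the standard Ultralytics Python API, but the dataset path, model variant, epoch count, and export settings here are placeholders, not values from this thread), fine-tuning from pretrained COCO weights and exporting could look like this. Note that YOLO detection models use a stride of 32, so input sizes should be multiples of 32, which both 480 and 640 satisfy:

```python
def valid_imgsz(n: int) -> bool:
    """YOLO detection models use a stride of 32, so the input size
    should be a multiple of 32 (both 480 and 640 qualify)."""
    return n > 0 and n % 32 == 0


def train_and_export(data_yaml: str, imgsz: int = 480, epochs: int = 50) -> str:
    """Hypothetical helper: fine-tune YOLOv8 from pretrained COCO
    weights on a custom dataset, then export for the ST tools.
    Requires `pip install ultralytics`; all argument values are
    placeholders to adapt to your own setup."""
    assert valid_imgsz(imgsz), "input size must be a multiple of 32"
    from ultralytics import YOLO  # imported lazily; assumed installed

    model = YOLO("yolov8n.pt")  # loads/downloads pretrained COCO weights
    model.train(data=data_yaml, imgsz=imgsz, epochs=epochs)
    return model.export(format="tflite")  # path to the exported model
```

The pretrained weights come for free here: passing a `.pt` checkpoint to `YOLO()` starts training from COCO weights instead of from scratch.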

 

Regarding performance, you should look at benchmarks, I think, and run tests, as it will depend on your dataset.

 

As for other input sizes, I am not sure that the application will support them.

 

Creating a complete firmware is a big task, but you can start by training and exporting a model with Ultralytics, then benchmark the model with the AI runner scripts.

 

The AI runner scripts allow you to run the model on your board through a Python script. You can pass a dataset by editing the script and get the outputs.

 

You will find the scripts in your local install of the st edge ai core in Scripts/.

 

When you have a model, you can use it like this

 

Script/n6script:

  1. stedgeai.exe generate --model ./models/custom_vit_int8.tflite --target stm32n6 --st-neural-art
  2. Run N6_loader.py to load the test firmware with your model on the board.

 

Script/AI runner:

  1. Edit checker.py (for example, load a numpy array with your input and export the outputs to inspect them)
  2. export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (if you run checker.py without it, you will see an error message telling you to do that)
  3. python examples/checker.py -d serial:921600
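For step 1, a minimal sketch of preparing an input that an edited checker.py could then load (the tensor shape, NHWC layout, and the `.npy` handoff are assumptions to adapt to your model's actual input):

```python
import numpy as np


def make_dummy_input(height: int = 480, width: int = 480,
                     channels: int = 3) -> np.ndarray:
    """Build one random uint8 image batch shaped (batch, H, W, C).
    The shape and layout are assumptions -- adapt them to the input
    your exported model actually expects."""
    rng = np.random.default_rng(0)  # seeded for reproducibility
    return rng.integers(0, 256, size=(1, height, width, channels),
                        dtype=np.uint8)


# Save it so an edited checker.py can read it back with np.load("input.npy")
x = make_dummy_input()
np.save("input.npy", x)
```

Feeding a fixed, reproducible input like this makes it easier to compare the on-target outputs against what the same model produces on the host.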

 

 

https://stedgeai-dc.st.com/assets/embedded-docs/ai_runner_python_module.html

 

Have a good day,

Julian



 

Thank you @Julian E.

For the moment I will work with the YOLOX model I have generated.
Validation on host is OK, but when I try to validate on target with CubeMX AI, the user interface seems to be stuck on this step...